CN116883297A - Multi-target automatic verification method based on infrared and visible light fusion - Google Patents
Multi-target automatic verification method based on infrared and visible light fusion
- Publication number
- CN116883297A (application number CN202310791996.1A)
- Authority
- CN
- China
- Prior art keywords
- target
- infrared
- fusion
- visible light
- targets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
Abstract
The invention discloses a multi-target automatic verification method based on infrared and visible light fusion, comprising: a target search stage, in which a target detection algorithm based on decision-level fusion of visible light images and infrared images is constructed to search for and detect the presence or absence of targets in the mission area and to perform fused target detection on multiple targets; and a target verification stage, in which, once the detection results of the fused target detection stage satisfy the conditions for entering the verification stage, the fused detection results are superimposed onto the infrared image and tracked by a target tracking algorithm based on the infrared image; the discovered targets are then zoomed in on one by one for fine-grained recognition to obtain fine-grained information about each target, and a maneuvering target tracking model estimates and predicts each target's position. The invention effectively solves the re-identification problem for highly similar targets, enables faster and more accurate switching of the locked target, and improves the perception capability of UAVs in real-world scenarios.
Description
Technical Field
The invention relates to the technical fields of computer vision, target localization, and autonomous UAV control, and in particular to a multi-target automatic verification method based on infrared and visible light fusion.
Background Art
Air-to-ground/sea multi-target automatic verification refers to a technique in which a UAV, using a variable-focus electro-optical pod, automatically performs fine-grained recognition of multiple targets in a mission area and determines their geographical positions.
Most current multi-target verification methods based on UAV electro-optical platforms verify targets one by one under manual control. The few automated methods all follow the same paradigm: search a large field of view to discover multiple targets, narrow the field of view to recognize a single target at fine granularity, then widen the field of view again and switch to the next target for verification. Within this paradigm, existing methods either use deep-learning-based target re-identification algorithms to build a consistent representation of targets before and after the field of view changes, or combine target localization (estimating the geographical position of every detected target) to construct such a consistent representation.
Existing methods have major limitations and cannot meet the capability requirements for autonomous multi-target verification by UAVs. For methods based on target re-identification, the computing power of airborne embedded devices is limited; after running the target detection and target tracking algorithms it is difficult to also run a re-identification model while maintaining real-time performance. For methods that rely on target localization, most existing localization algorithms are accurate only for targets at the center of the field of view, i.e., on the optical axis; for targets near the image edge the localization error is large, so the results cannot be used effectively for re-identification.
Most current UAVs can carry dual-light or even triple-light pods, and the infrared and visible light channels can perceive the environment independently with different fields of view. However, the fusion of visible and infrared images in existing methods only enriches the information the UAV perceives; it does not improve the flexibility of the UAV's environmental perception.
This patent aims to solve the following technical problems:
1. Poor adaptability of multi-modal fusion: most existing UAV ground/sea target detection is performed with a single sensor, or with pixel-level or feature-level modal fusion. The former cannot exploit the complementary information between infrared and visible light; the latter has a higher computational cost and cannot flexibly adapt the model to the environment.
2. Difficulty of consistent target representation: during UAV reconnaissance of ground/sea targets, the targets are often highly similar (multiple tanks, multiple uniformed personnel, several similar ships, multiple sea-surface buoys, etc.) and the algorithms must run in real time, so it is difficult to build a consistent representation of targets before and after field-of-view changes using deep-learning-based re-identification, which keeps the degree of autonomy of the UAV reconnaissance process low.
This problem contains two sub-problems. a: When a target is discovered in the search stage and the system is about to enter the verification stage, detection switches from fused visible/infrared detection to detection and tracking on the infrared image alone. A target T_a may then have been detected in the visible image but not in the infrared image; once the verification stage begins and the visible channel is no longer used for tracking, how can tracking of T_a be maintained in the infrared image? b: While the UAV is moving or circling, a target T_b may leave the infrared field of view at some moment t1 and re-enter it within a period t2. How can it be re-identified as the same target T_b rather than as a new target, so that it is not verified twice?
3. Poor flexibility of UAV multi-task reconnaissance: when reconnoitring ground/sea targets, the onboard computing platform must simultaneously run target detection, target tracking, fine-grained recognition and other algorithms. Existing algorithms cannot make full use of a multi-sensor pod to keep the mission flexible; a new algorithmic framework is needed that exploits the respective strengths of infrared and visible light so that tasks can be executed more flexibly.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a multi-target automatic verification method based on infrared and visible light fusion that addresses the defects of the existing technology.
The technical solution adopted by the present invention to solve this technical problem is as follows:
The present invention provides a multi-target automatic verification method based on infrared and visible light fusion, comprising the following steps:
Target search stage: construct a target detection algorithm based on decision-level fusion of visible light images and infrared images, search for and detect the presence or absence of targets in the mission area, and perform fused target detection on multiple targets;
Target verification stage: once the detection results of the fused target detection stage satisfy the conditions for entering the verification stage, superimpose the fused detection results onto the infrared image and track them with a target tracking algorithm based on the infrared image; zoom in on the discovered targets one by one for fine-grained recognition to obtain fine-grained information about each target, and estimate and predict each target's position with a maneuvering target tracking model.
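A purely illustrative sketch of this two-stage flow is given below, assuming a Python implementation; the callables passed in (fused_detect, ir_track, verify_one) stand in for the detection, tracking and verification components and are not named in the patent.

```python
def run_two_stage(frames, fused_detect, ir_track, verify_one, stable_needed=5):
    """Search stage until detections are stable, then verification stage."""
    stable, stage, tracks = 0, "search", []
    for vis_img, ir_img in frames:
        if stage == "search":
            detections = fused_detect(vis_img, ir_img)       # decision-level fusion
            stable = stable + 1 if detections else 0
            if stable >= stable_needed:                       # stable detections -> verify
                stage, tracks = "verify", ir_track(ir_img, detections)
        else:
            tracks = ir_track(ir_img, tracks)                 # wide-FOV infrared tracking
            for t in tracks:                                  # zoom and identify one by one
                verify_one(vis_img, t)
    return stage
```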
Further, in the method of the present invention, the target detection algorithm based on decision-level fusion of visible light images and infrared images specifically comprises:
retaining the results of targets detected only in the visible image or only in the infrared image;
performing weighted fusion of the results for the same target detected in both the visible image and the infrared image, including fusion of the bounding boxes and fusion of the target confidences;
merging the resulting detections as the detection results of all corresponding targets in the fused image, thereby achieving fast target detection based on decision-level fusion.
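A minimal sketch of the weighted fusion of one matched infrared/visible detection pair is shown below. The single weight w and the (cx, cy, width, height) box format are illustrative assumptions; the patent only states that the weighting is preset according to the circumstances.

```python
def fuse_pair(box_ir, box_vis, conf_ir, conf_vis, w=0.5):
    """Fuse one infrared detection with its matched visible detection.

    box_* are (cx, cy, width, height); w is the preset infrared weight.
    """
    fused_box = tuple(w * i + (1.0 - w) * v for i, v in zip(box_ir, box_vis))
    fused_conf = w * conf_ir + (1.0 - w) * conf_vis
    return fused_box, fused_conf

# Example: strongly overlapping detections of the same target in both modalities
print(fuse_pair((100, 80, 40, 30), (104, 78, 42, 28), 0.9, 0.7, w=0.6))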
Further, in the method of the present invention, superimposing the fused target detection results onto the infrared image and tracking them with an infrared-image-based target tracking algorithm specifically comprises:
after the detection results of the fused target detection stage satisfy the conditions for entering the verification stage, superimposing the fused detection results onto the infrared image and tracking them with the infrared-image-based target tracking algorithm; to prevent a target T_a whose features are weak in the infrared image from being lost by the tracker, T_a is locked first during lock-on verification and fine-grained recognition is performed on it; if several such targets exist when the verification stage begins, the first one is verified and its position is recorded, and the system returns to the target search stage, repeating this until, upon entering the verification stage, every remaining target can be tracked in the infrared image;
if a target re-enters the infrared field of view, it is first assigned a target number T_j; when the target is locked for fine-grained recognition, the target localization result is first used to judge whether it has already been verified. If, for some already-verified target T_i, the condition
||Pos_Tj − Pos_Ti|| < ε
is satisfied, where ε is a preset distance threshold, then T_j is considered to be target T_i re-entering the field of view; otherwise the target is treated as a new target, or the error of the target position estimation model is considered too large, the target number is kept unchanged, and the target awaits further verification. Pos_Ti denotes the position of target T_i.
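The following sketch illustrates this position-based re-identification check. The numeric threshold eps and the coordinate convention are illustrative assumptions; the patent only requires that the bound be preset.

```python
import math

def is_revisited(pos_new, verified_positions, eps=15.0):
    """Return the ID of an already-verified target whose recorded position lies
    within eps of the re-entering target's estimated position, else None."""
    for target_id, pos in verified_positions.items():
        if math.dist(pos_new, pos) < eps:
            return target_id
    return None

verified = {"T1": (120.0, 45.0), "T2": (300.5, 80.2)}
print(is_revisited((118.0, 47.5), verified))   # -> "T1": same target re-entering the FOV
print(is_revisited((500.0, 10.0), verified))   # -> None: treated as a new target
```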
Further, in the method of the present invention, zooming in on the discovered targets one by one for fine-grained recognition specifically comprises:
pre-loading the fine-grained recognition models required by the reconnaissance task, including an open-scene character recognition model, a person detection model and a face detection model; verifying ships entering the reconnaissance range; if persons are detected in the deck area of a ship, further recognizing their faces with the face detection model; and saving the captured ship images, hull numbers, number of persons on board and facial information and transmitting them back to the command center.
Further, the target search stage of the method of the present invention specifically includes the following steps:
Step 1: set the visible and infrared lenses to the maximum field of view and enter the target search state;
Step 2: acquire the visible image I_V and the infrared image I_I and pre-process each of them;
Step 3: feed the images I_V and I_I into the visible-light target detection model and the infrared target detection model respectively for inference, obtaining the detection box sets B_V and B_I; after resolution alignment take the union of the two sets and fuse the detections in their intersection, i.e., when a target is detected in both the infrared and the visible image, fuse the two detection boxes: the center of the fused box is a weighted combination of the infrared and visible results, and the predicted confidence score is likewise a weighted combination of the two confidences, with the weights preset according to the circumstances. The detected targets thus fall into three groups: G_V, targets detected in the visible image but not in the infrared image; G_I, targets detected in the infrared image but not in the visible image; and G_VI, targets detected in both;
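A minimal sketch of this grouping and fusion step is given below, assuming boxes are (x1, y1, x2, y2) corners after resolution alignment, each paired with a confidence score; the single weight w and the IoU matching threshold are illustrative assumptions.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(boxes_vis, boxes_ir, w=0.5, iou_thr=0.5):
    """Split detections into G_V / G_I / G_VI and fuse the matched pairs."""
    g_v, g_vi, matched_ir = [], [], set()
    for bv, sv in boxes_vis:
        best, best_iou = None, iou_thr
        for j, (bi, si) in enumerate(boxes_ir):
            if j not in matched_ir and iou(bv, bi) >= best_iou:
                best, best_iou = j, iou(bv, bi)
        if best is None:
            g_v.append((bv, sv))                                  # visible only -> G_V
        else:
            matched_ir.add(best)
            bi, si = boxes_ir[best]
            fused_box = tuple(w * i + (1 - w) * v for i, v in zip(bi, bv))
            g_vi.append((fused_box, w * si + (1 - w) * sv))       # both -> G_VI
    g_i = [d for j, d in enumerate(boxes_ir) if j not in matched_ir]  # infrared only -> G_I
    return g_v, g_i, g_vi
```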
Step 4: track the multiple targets in the field of view with the target tracking algorithm and number them in sequence by group, sorting the targets within each group by confidence from high to low;
Step 5: adaptively adjust the visible and infrared focal lengths and repeat Step 4; when the number of consecutive detections in which the targets are stably found exceeds a threshold, enter the target verification stage.
Further, the target verification stage of the method of the present invention specifically includes the following steps:
Step 6: according to the target IDs generated by the fused target detection results, lock and stare at the target with ID i; on the first lock, i is the first ID in the ordering;
Step 7: in the infrared image, maintain lock-on tracking of the targets; by changing the focal length of the visible lens, zoom in on target i and perform fine-grained recognition on it to obtain its specific category, including the ship model, hull number and the number of persons on board; while acquiring the fine-grained information, estimate the target's position and velocity with the localization algorithm and the constructed multi-model particle-filter target state estimation algorithm, and predict the target's next position;
Step 8: after obtaining the information of the target with ID i, synchronize with the tracking state of the infrared image, switch the lock to the target with ID i+1, and repeat Steps 6 and 7.
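A short sketch of the lock-and-switch loop in Steps 6 to 8 follows. The callables supplied by the caller (lock_and_zoom, fine_grained_id, estimate_state) stand in for the pod control, the recognition model and the state estimator; they are assumptions, not components named in the patent.

```python
def verify_targets(target_ids, lock_and_zoom, fine_grained_id, estimate_state):
    report = {}
    for tid in target_ids:                 # IDs already ordered by group and confidence
        frame = lock_and_zoom(tid)         # keep the IR lock, narrow the visible FOV on tid
        info = fine_grained_id(frame)      # e.g. ship model, hull number, crew count
        state = estimate_state(tid)        # position/velocity estimate and prediction
        report[tid] = {"info": info, "state": state}
    return report
```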
The beneficial effects produced by the present invention are:
1. When the UAV performs multi-target verification, the method constructed by the present invention can build a consistent representation of the targets through the localization algorithm, the infrared/visible fusion target detection algorithm and other methods, effectively solving the re-identification problem for highly similar targets.
2. The present invention makes full use of the flexibility of the dual-light pod: by controlling the infrared and visible channels independently, the locked target can be switched faster and more accurately, fully exploiting the tracking-friendly character of infrared images and the fine-grained detail of visible images.
3. The present invention fully combines target localization with target detection and tracking, improving the UAV's perception capability in real-world scenarios.
4. The present invention greatly increases the degree of autonomy of the UAV reconnaissance system and can complete rapid autonomous verification of multiple targets in the mission area within a short time.
Brief Description of the Drawings
The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
Figure 1 shows the target detection algorithm based on decision-level fusion of visible light images and infrared images;
Figure 2 is a schematic diagram of the IoU fusion of infrared and visible light target detection results;
Figure 3 is a schematic diagram of infrared and visible light performing different tasks with different fields of view.
Detailed Description of the Embodiments
To make the purpose, technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
Embodiment 1
The multi-target automatic verification method based on infrared and visible light fusion of this embodiment includes two stages, a target search stage and a target verification stage. The target search stage mainly answers whether targets are present in the mission area, while the target verification stage zooms in on the discovered targets one by one for fine-grained recognition, obtaining fine-grained information such as ship model, hull number and tank model; by combining further algorithm models such as weapon/firepower classification, different back-end tasks such as threat assessment and alerting can be realized.
For technical problem 1, a target detection algorithm based on decision-level fusion of visible light images and infrared images is constructed. First, the results of targets detected only in the visible image or only in the infrared image are retained; then the results for the same target detected in both the visible and the infrared image are fused with weights, including fusion of the bounding boxes and fusion of the confidences; finally, the resulting detections are merged as the detection results of all corresponding targets in the fused image, achieving fast target detection based on decision-level fusion.
The algorithm framework is shown in Figure 1. An image enhancement algorithm can be selected manually according to the weather (e.g., an adaptive dehazing algorithm in foggy conditions), and different target detection network models can be chosen by weighing small-target detection capability, real-time performance and background complexity against the task requirements. Compared with feature-level or pixel-level infrared/visible fusion, this approach is more flexible in practical use.
Infrared/visible image fusion for target detection falls into three categories: pixel-level, feature-level and decision-level fusion. Pixel-level fusion registers the infrared and visible images and combines the corresponding pixels into a fused image before it is fed into the network model; it is accurate but computationally heavy, and in practice the two images may not have been captured at the same moment or with the same focal length because of differing focal lengths and processing delays, which makes registration difficult. Feature-level fusion extracts features by convolution, keeps only the important information while discarding redundancy, and then fuses the features; this compresses the detail of the source images to some extent and also lacks flexibility in practical use. Decision-level fusion obtains detection results for the infrared and visible images through separate network models and then fuses the results according to actual needs; it is highly flexible and particularly suited to the UAV target reconnaissance scenario of the present invention. The decision-level fusion detection algorithm can make full use of the image data of both spectra during the target search stage while reducing the fusion problems caused by viewpoint, focal length, latency and resolution, and the models can be chosen freely for the actual scene: when the mission area has a complex background, a network model that is more robust against complex backgrounds is selected; when the background of the area is simple, small-target detection capability can be maximized to obtain a larger effective detection range.
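The following sketch illustrates this kind of scene-dependent pipeline selection. The model and enhancement names are placeholders chosen for illustration only and do not come from the patent.

```python
def select_pipeline(weather: str, background: str) -> dict:
    enhancement = {"fog": "adaptive_dehaze", "clear": None}.get(weather, None)
    detector = ("robust_detector"              # favour robustness on cluttered scenes
                if background == "complex"
                else "small_object_detector")  # favour small-target recall on plain scenes
    return {"enhancement": enhancement,
            "visible_detector": detector,
            "infrared_detector": detector}

print(select_pipeline("fog", "complex"))
```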
For technical problem 2, a strategy is proposed in which the infrared and visible channels perform different tasks: the infrared image, in which the contrast between target and background is more pronounced, keeps tracking the multiple targets with a large field of view, while the visible image, with its richer detail, performs finer-grained recognition (tank model, ship model, hull number, buoy markings, etc.) and provides the target's position information, which is then estimated and predicted by the maneuvering target tracking model, as shown in Figure 3.
The maneuvering target tracking model can use the interacting multiple model (IMM) method, which is robust to uncertainty in the motion state. Specifically, to avoid the estimation delay produced by tracking algorithms with maneuver detection and the degradation of tracking performance during maneuver detection, an adaptive maneuvering target tracking algorithm based on the interacting multiple model (IMM) is adopted, which estimates the target's maneuvering state adaptively through the interaction of different motion models.
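A minimal sketch of such an IMM tracker is shown below, assuming the third-party filterpy package and a simple pair of constant-velocity models with different process noise; the matrices and noise levels are illustrative assumptions, not values from the patent.

```python
import numpy as np
from filterpy.kalman import KalmanFilter, IMMEstimator

def make_cv_filter(q):
    """1-D constant-velocity Kalman filter with process-noise scale q (illustrative)."""
    kf = KalmanFilter(dim_x=2, dim_z=1)
    kf.F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    kf.H = np.array([[1.0, 0.0]])
    kf.Q = q * np.eye(2)
    kf.R = np.array([[4.0]])
    kf.P *= 10.0
    return kf

filters = [make_cv_filter(0.01), make_cv_filter(1.0)]    # "quiet" vs "maneuvering" model
mu = np.array([0.5, 0.5])                                # initial model probabilities
M = np.array([[0.95, 0.05], [0.05, 0.95]])               # model transition matrix
imm = IMMEstimator(filters, mu, M)

for z in [0.0, 1.1, 2.3, 4.8, 8.1]:                      # toy range measurements
    imm.predict()
    imm.update(np.array([z]))
    print(imm.x.ravel(), imm.mu)                          # fused state and model weights
```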
For technical problem 2a, after the detection results of the fused target detection stage satisfy the conditions for entering the verification stage, the fused detection results are superimposed onto the infrared image and tracked by the infrared-image-based target tracking algorithm. To prevent a target T_a whose features are weak in the infrared image from being lost by the tracker, T_a is locked first during lock-on verification and fine-grained recognition is performed on it; if several such targets exist when the verification stage begins, the first one is verified and its position is recorded, and the system returns to the target search stage, repeating this until, upon entering the verification stage, every remaining target can be tracked in the infrared image. For technical problem 2b, if a target re-enters the infrared field of view, it is first assigned a target number T_j; when the target is locked for fine-grained recognition, the target localization result is first used to judge whether it has already been verified. If the distance between the estimated position of T_j and the recorded position Pos_Ti of an already-verified target T_i is below a preset threshold, T_j is considered to be target T_i re-entering the field of view; otherwise the target is treated as a new target, or the error of the target position estimation model is considered too large, the target number is kept unchanged, and the target awaits further verification.
For technical problem 3, to keep multi-task UAV reconnaissance flexible, the deep learning algorithms mentioned in the present invention can be replaced by other models and adjusted for different tasks and environments. Taking a routine maritime patrol and reconnaissance scenario as an example, the fine-grained recognition models are pre-loaded according to the content of the reconnaissance task: by loading an open-scene character recognition model, a person detection model, a face detection model and so on, ships entering the reconnaissance range are verified; if persons are detected on the deck or other areas of a ship, their faces are further recognized with the face detection model, and the captured ship images, hull numbers, number of persons on board, facial information and other data are saved and transmitted back to the command center.
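The verification cascade described above can be sketched as follows. The three model wrappers (read_hull_number, detect_persons, detect_faces) are hypothetical stand-ins for the pre-loaded OCR, person-detection and face-detection models; only the control flow is illustrated.

```python
def verify_ship(ship_image, read_hull_number, detect_persons, detect_faces):
    record = {
        "image": ship_image,
        "hull_number": read_hull_number(ship_image),   # open-scene character recognition
        "persons": detect_persons(ship_image),          # persons on deck
        "faces": [],
    }
    if record["persons"]:                               # run face detection only when needed
        record["faces"] = [detect_faces(p) for p in record["persons"]]
    record["person_count"] = len(record["persons"])
    return record                                        # saved and sent back to the command center
```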
Embodiment 2
The multi-target automatic verification method based on infrared and visible light fusion of this embodiment includes the following steps:
In the target search stage, a target detection algorithm based on infrared/visible fusion is used. To exploit the infrared and visible information effectively while keeping the computational cost low and remaining flexible in practical scenarios, this method chooses the decision-level fusion scheme, which has the lowest computational cost, rather than pixel-level or feature-level image fusion; the algorithm framework is shown in Figure 1. Specifically:
Step 1: set the visible and infrared lenses to the maximum field of view and enter the target search state;
Step 2: acquire the visible image I_V and the infrared image I_I and pre-process each of them to reduce the influence of weather and sensor noise;
Step 3: feed the images I_V and I_I into the visible-light target detection model and the infrared target detection model respectively for inference, obtaining the detection box sets B_V and B_I; after resolution alignment take the union of the two sets and fuse the detections in their intersection, i.e., when a target is detected in both the infrared and the visible image, fuse the two detection boxes: the center of the fused box is a weighted combination of the infrared and visible results, as shown in Figure 2, and the predicted confidence score is fused in the same way, with the weights preset according to the circumstances. At this point the detected targets fall into three groups: G_V, targets detected in the visible image but not in the infrared image; G_I, targets detected in the infrared image but not in the visible image; and G_VI, targets detected in both;
Step 4: track the multiple targets in the field of view with the target tracking algorithm and number them in sequence by group, sorting the targets within each group by confidence from high to low.
Step 5: adaptively adjust the visible and infrared focal lengths and repeat Step 4; when the number of consecutive detections in which the targets are stably found exceeds a threshold, enter the target verification stage.
In the target verification stage, the infrared camera always keeps a large field of view and tracks the targets in the infrared image with the target tracking algorithm to maintain awareness of all targets, while the visible camera narrows its field of view by adjusting the focal length and zooms in on a target for fine-grained recognition; after the recognition is completed, the target information tracked in the infrared image is used to switch the locked target, and fine-grained recognition is performed on the targets one by one.
Step 6: according to the target IDs generated by the fused target detection results, lock and stare at the target with ID i (on the first lock, i is the first ID in the ordering);
Step 7: in the infrared image, maintain lock-on tracking of the targets; by changing the focal length of the visible lens, zoom in on target i and perform fine-grained recognition on it to obtain its specific category, such as the ship model, hull number and the number of persons on board; while acquiring the fine-grained information, estimate the target's position and velocity with the localization algorithm and the constructed multi-model particle-filter target state estimation algorithm, and predict the target's next position;
Step 8: after obtaining the information of the target with ID i, synchronize with the tracking state of the infrared image, switch the lock to the target with ID i+1, and repeat Steps 6 and 7.
It should be understood that the step numbers in the above embodiments do not imply an order of execution; the execution order of each process is determined by its function and internal logic and does not limit the implementation of the embodiments of the present application.
It should be understood that those skilled in the art can make improvements or modifications based on the above description, and all such improvements and modifications fall within the protection scope of the appended claims of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310791996.1A CN116883297A (en) | 2023-06-30 | 2023-06-30 | Multi-target automatic verification method based on infrared and visible light fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310791996.1A CN116883297A (en) | 2023-06-30 | 2023-06-30 | Multi-target automatic verification method based on infrared and visible light fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116883297A true CN116883297A (en) | 2023-10-13 |
Family
ID=88256078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310791996.1A Pending CN116883297A (en) | 2023-06-30 | 2023-06-30 | Multi-target automatic verification method based on infrared and visible light fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116883297A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119152333A (en) * | 2024-11-13 | 2024-12-17 | 西北工业大学 | Multi-mode target re-identification method based on prompt learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107105207A (en) * | 2017-06-09 | 2017-08-29 | 北京深瞐科技有限公司 | Target monitoring method, target monitoring device and video camera |
CN108665487A (en) * | 2017-10-17 | 2018-10-16 | 国网河南省电力公司郑州供电公司 | Substation's manipulating object and object localization method based on the fusion of infrared and visible light |
CN109815844A (en) * | 2018-12-29 | 2019-05-28 | 西安天和防务技术股份有限公司 | Object detection method and device, electronic equipment and storage medium |
CN110443776A (en) * | 2019-08-07 | 2019-11-12 | 中国南方电网有限责任公司超高压输电公司天生桥局 | A kind of Registration of Measuring Data fusion method based on unmanned plane gondola |
CN113033468A (en) * | 2021-04-13 | 2021-06-25 | 中国计量大学 | Specific person re-identification method based on multi-source image information |
CN113255521A (en) * | 2021-05-26 | 2021-08-13 | 青岛以萨数据技术有限公司 | Dual-mode target detection method and system for embedded platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11900668B2 (en) | System and method for identifying an object in water | |
Zhang et al. | Survey on Deep Learning‐Based Marine Object Detection | |
CN115147594A (en) | Ship image trajectory tracking and predicting method based on ship bow direction identification | |
Yu et al. | Object detection-tracking algorithm for unmanned surface vehicles based on a radar-photoelectric system | |
Yi et al. | Research on underwater small target detection algorithm based on improved YOLOv7 | |
CN113705375A (en) | Visual perception device and method for ship navigation environment | |
CN109145747A (en) | A kind of water surface panoramic picture semantic segmentation method | |
Zhang et al. | A object detection and tracking method for security in intelligence of unmanned surface vehicles | |
Zhu et al. | Arbitrary-oriented ship detection based on retinanet for remote sensing images | |
CN106372590A (en) | Sea surface ship intelligent tracking system and method based on machine vision | |
CN118470299A (en) | ISSOD-based infrared ship small target detection method and ISSOD-based infrared ship small target detection equipment | |
CN116883297A (en) | Multi-target automatic verification method based on infrared and visible light fusion | |
CN115018883B (en) | Power transmission line unmanned aerial vehicle infrared autonomous inspection method based on optical flow and Kalman filtering | |
Fu et al. | Real-time infrared horizon detection in maritime and land environments based on hyper-laplace filter and convolutional neural network | |
Dong et al. | Visual detection algorithm for enhanced environmental perception of unmanned surface vehicles in complex marine environments | |
Cafaro et al. | Toward enhanced support for ship sailing | |
CN112307943B (en) | Water area man-boat target detection method, system, terminal and medium | |
Jin et al. | An occlusion-aware tracker with local-global features modeling in UAV videos | |
KR20210153989A (en) | Object recognition apparatus with customized object detection model | |
CN116224321A (en) | Unmanned ship automatic target detection method and system based on computer vision | |
Yao et al. | USVTrack: USV-Based 4D Radar-Camera Tracking Dataset for Autonomous Driving in Inland Waterways | |
CN119672325B (en) | Target identification tracking guiding method, detection equipment, program product and storage medium of photoelectric pod | |
CN119672324B (en) | Target tracking detection method, detection equipment, program product and storage medium of photoelectric pod | |
CN110895680A (en) | Unmanned ship water surface target detection method based on regional suggestion network | |
Xue et al. | FECI-RTDETR A lightweight unmanned aerial vehicle infrared small target detector Algorithm based on RT-DETR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |