
CN110490903B - Multi-target rapid capturing and tracking method in binocular vision measurement - Google Patents

Multi-target rapid capturing and tracking method in binocular vision measurement

Info

Publication number
CN110490903B
Authority
CN
China
Prior art keywords
target
beacon
tracking
image
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910738024.XA
Other languages
Chinese (zh)
Other versions
CN110490903A (en)
Inventor
陆文
严棚
徐智勇
魏宇星
左颢睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS
Priority to CN201910738024.XA
Publication of CN110490903A
Application granted
Publication of CN110490903B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/11 Image analysis; Region-based segmentation
    • G06T7/136 Image analysis; Segmentation, edge detection involving thresholding
    • G06T7/155 Image analysis; Segmentation, edge detection involving morphological operators
    • G06T7/246 Image analysis; Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/62 Image analysis; Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66 Image analysis; Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/73 Image analysis; Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10016 Indexing scheme for image analysis or image enhancement; Image acquisition modality; Video, image sequence
    • G06T2207/20081 Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training, learning
    • G06T2207/20084 Indexing scheme for image analysis or image enhancement; Special algorithmic details; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-target rapid capture and tracking method for binocular vision measurement, belonging to the field of computer vision measurement. The method comprises image preprocessing under strong-clutter background conditions, target capture, multi-target recognition, and multi-target tracking. The invention enables real-time processing of massive data, can stably capture and track multiple beacon points on an object under strong clutter such as sunlight, clouds, imaging noise, and sea-surface reflection, and matches each beacon point in the image to its corresponding spatial position. The invention improves the stability and robustness of multi-target recognition and tracking in binocular vision measurement.

Description

A method for rapid multi-target capture and tracking in binocular vision measurement

Technical Field

The invention relates to a method for rapidly capturing and tracking multiple weak, small targets against complex backgrounds, in particular to a method for capturing and tracking multiple cooperative target sources (the beacons referred to below) in binocular vision measurement. It belongs to the fields of image processing and computer vision measurement.

Background Art

With the development of optoelectronic technology and its integration with other disciplines, non-contact measurement based on computer vision is becoming an important measurement approach. Vision measurement can determine not only the distance to an object but also its three-dimensional position and attitude angles. It is low in cost and works over long ranges: as long as the measurement cameras can reliably extract a sufficient number of feature points on the object, the object's position can in principle be measured at any distance. Vision measurement is now widely used in aerospace, aviation, and navigation, as well as in machine manufacturing, medicine, biology, architecture, and other fields.

To measure, quickly and in real time, the position and attitude of an object in space relative to a reference coordinate system, the primary problem is to rapidly capture and accurately track a certain number of cooperative beacon points on the object. By extracting the image coordinates of the beacon points and combining them with the coordinates of the corresponding beacon points on the measured object, a set of projection mapping equations can be established; the rotation and translation matrices of the measured object relative to the reference coordinate system, that is, its position and attitude angles in space, can then be computed from photogrammetric principles.

In the course of realizing the present invention, the inventors found that the prior art has at least the following deficiencies:

Most target capture and tracking devices in binocular vision measurement systems are PC-based; however, PCs are bulky, power-hungry, and insufficiently stable for engineering applications, so a stable, high-speed embedded processing platform is urgently needed for real-time processing of the large data volumes from the two cameras. In addition, traditional multi-target capture and tracking methods mostly process each beacon in isolation, neither considering the rigid connection between the beacon points on the two sides of the same object in binocular vision measurement nor making effective use of the result data unique to a binocular measurement system, namely the position and attitude of the measured object, to improve the stability of target capture and tracking.

Summary of the Invention

The purpose of the present invention is to provide a multi-target rapid capture and tracking method for binocular vision measurement that enables real-time processing of massive data, stably captures and tracks multiple beacon points on an object under strong clutter (sunlight, clouds, imaging noise, sea-surface reflection, etc.), and matches each tracked image beacon to its physical spatial position.

The multi-target capture and tracking method for binocular measurement of the present invention comprises: 1) image preprocessing under strong-clutter background conditions; 2) target capture; 3) multi-target recognition; 4) multi-target tracking. Each step is introduced below.

1) Image preprocessing under strong-clutter background conditions. A filtering method is used to suppress interference from background clutter and enhance the target energy.

The background clutter mainly includes sunlight, clouds, sea-surface reflection, and the like.

The filtering method refers to convolving the image with a 5*5 high-pass filter template to enhance the beacon points in the image and suppress background clutter.

2) Target capture. All possible candidate targets in the image are extracted to form a candidate target set in preparation for the subsequent target recognition step. Target capture comprises target adaptive threshold segmentation and target search with region labeling.

Target adaptive threshold segmentation means automatically computing a segmentation threshold from the global image characteristics and using it to segment the image into a binary image with foreground 1 and background 0.

Target search and region labeling means computing all connected regions of the segmented binary image and collecting the attribute features of each connected region; it comprises mathematical morphology processing, multi-target region labeling, target merging and separation, and multi-target feature extraction.

Mathematical morphology processing means applying a morphological opening to the binary image, that is, erosion followed by dilation, to remove small spurious connected regions and fill in the larger connected regions.

Multi-target region labeling means assigning the same label to pixels in the binary image that satisfy a given connectivity rule (4-neighborhood or 8-neighborhood connectivity), so that connected regions carry the same label.

Target merging and separation means that if two target regions are very close to each other they are merged into a single target region; otherwise they are kept separate.

Multi-target feature extraction means computing the attribute features of each candidate target in the candidate target set, including the target's area, energy, aspect ratio, and centroid coordinates.

3) Multi-target recognition. Multiple real beacon points are identified from the candidate target set; this step comprises target attribute feature recognition, recognition of the geometric relationship between targets, and target motion continuity recognition.

Target attribute feature recognition means removing false targets from the candidate target set according to certain attribute features of the targets and retaining the targets most likely to be real beacon points.

The certain attribute features are the candidate target's area, energy, and aspect ratio; if any of these falls outside a reasonable range, the candidate is regarded as a false target and is rejected.

Recognition of the geometric relationship between targets exploits the fact that the beacon points mounted on each side of the measured object project onto the image in an approximately square arrangement; the geometric positions of the beacon points relative to one another can therefore be used to find the real beacon points in the candidate target set.

Target motion continuity recognition means analyzing the motion trajectory of each target in order to remove false target points from the candidate target set.

The target motion trajectory is judged from the feature information of the target in each frame (including target area and target position), by checking the smoothness of the change in target area between adjacent frames and the continuity of the change in target coordinates.

4) Multi-target tracking. The centroid coordinates of each beacon point are extracted in every frame to form the target's motion trajectory; this step comprises multi-target initial tracking, multi-target trajectory prediction, multi-target lock-on tracking, and recapture of lost targets.

Multi-target initial tracking judges, from the continuity of target motion between adjacent frames, whether the beacon points to be tracked are real beacon points, while building the motion trajectory of each beacon in preparation for trajectory prediction; it supplements the multi-target recognition step.

Multi-target trajectory prediction means predicting the target's displacement at the next instant from its velocity and displacement at the current instant.

Multi-target lock-on tracking means extracting the centroid coordinates of each beacon at the current instant in real time and building the motion trajectory of each beacon.

Recapture of lost targets means that when a target is lost, the tracking algorithm continues to lock onto and track the remaining target points and uses them for photogrammetry to compute the object's position and attitude, provided the total number of beacons remaining in the two cameras is at least four. When a lost target reappears, the recapture method immediately captures it and adds it back into the photogrammetric equation system, improving the accuracy with which the equations are solved and hence the accuracy of the computed object position and attitude.

The advantages of the present invention over the prior art are:

(1) The image preprocessing method for strong-clutter backgrounds reduces background clutter, raises the signal-to-noise ratio of the image, and lowers the complexity and processing time of the subsequent target capture and tracking;

(2) The invention performs multi-target recognition in a layered, progressive manner. Candidate targets are first screened by their attribute features; the geometric relationships between targets are then used for further identification, eliminating the many candidate combinations that do not satisfy them; finally, temporal identification based on target motion continuity makes the final decision from the trajectory continuity of the target motion. This progressive scheme improves recognition efficiency, reduces computation, and lowers the false-recognition rate;

(3) The invention tracks multiple targets stably: when a target is lost, the remaining targets continue to be tracked, and when the lost target reappears, the recapture method captures it again and adds it back to the tracking list without changing its tracking number;

(4) The method can be widely applied in binocular vision measurement systems to improve the reliability and stability of target capture and tracking.

Brief Description of the Drawings

Fig. 1 is a flow diagram of the multi-target rapid capture and tracking method for binocular vision measurement of the present invention;

Fig. 2 is a flow diagram of the target capture method of the present invention;

Fig. 3 is a flow diagram of the target search and region labeling method of the present invention;

Fig. 4 is a flow diagram of the multi-target recognition method of the present invention;

Fig. 5 is a schematic diagram of the numbering of the beacon points in the image according to the present invention;

Fig. 6 is a schematic diagram of the projection relationship, in the image, of the beacon points on one side of the measured object according to the present invention;

Fig. 7 is a flow diagram of the multi-target tracking method of the present invention;

Fig. 8 is a schematic diagram of the state transitions between the constituent units of the multi-target rapid capture and tracking method of the present invention;

Fig. 9 is a schematic diagram of the binocular vision measurement principle of the present invention.

Detailed Description of the Embodiments

To make the objects, technical solutions, and advantages of the present invention clearer, the implementation of the invention is described in further detail below with reference to the accompanying drawings.

The hardware environment for implementing the present invention consists of an image processing platform, two cameras, and the measured object. The image processing platform is composed of four 32-bit floating-point ADSP-TS201 digital signal processors with a 600 MHz clock and one very-large-scale high-speed programmable logic device, an XC4VLX80. As shown in Fig. 9, the two cameras 31 and 32 are placed on either side of the measured target, and four beacon lamps are mounted on each side of the measured target 33. The implementation flow of the invention is as follows: first, the two cameras simultaneously acquire image data streams, each containing the four beacon points on one side of the measured object, and the two streams are transmitted in real time to the image processing platform over optical fiber; the image processing platform performs image preprocessing and multi-target capture and tracking and extracts the centroid coordinates of each beacon point; photogrammetry is then carried out with the beacon centroid coordinates to solve for the position and attitude of the object relative to the reference coordinate system 34.

As shown in Fig. 1, the multi-target rapid capture and tracking method for binocular vision measurement of the present invention is used for rapid capture, recognition, and tracking of targets under strong clutter interference. The specific implementation steps are: 1) image preprocessing under strong-clutter background conditions; 2) target capture; 3) multi-target recognition; 4) multi-target tracking. Each step is detailed below.

1) Image preprocessing under strong-clutter background conditions

For the subsequent target capture and tracking to work correctly, an effective preprocessing method must be chosen to preprocess the image under strong-clutter background conditions, so as to raise the target energy and remove or weaken the influence of background clutter. Analysis of experimental data shows that the main clutter interference comes from sunlight, clouds, and sea-surface clutter, while within a given measurement range the beacon points on the two sides of the object image at roughly 3*3 to 5*5 pixels; the energy of a beacon point in the image is far smaller than that of the sunlight, and the beacon points appear in the image as isolated singular points.

In view of this, the present invention provides a 5*5 high-pass filter template F to enhance the beacon points in the image and suppress the background clutter. The 5*5 high-pass filter template F is:

(The coefficients of the 5*5 template F are given as an image in the original patent document.)

Convolving the image with the high-pass filter template F removes most of the continuously distributed clutter interference, such as continuous cloud cover and sunlight, but interference from discontinuously distributed clouds still cannot be removed. The filtered image therefore contains, besides the useful target point data, some residual filtered noise points, which must be dealt with in the subsequent steps.
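As a minimal illustration of this preprocessing step, the sketch below convolves a frame with a 5*5 high-pass kernel. The patent gives the actual coefficients of F only as an image, so the kernel used here (a center-weighted, zero-sum template) is an assumption, not the patented filter.

```python
import numpy as np
from scipy.ndimage import convolve

def highpass_prefilter(image):
    """Suppress smooth background clutter and boost small bright spots.

    The 5*5 kernel below is only a stand-in for the patent's template F:
    every off-center tap is -1 and the center is +24, so the coefficients
    sum to zero, the response to locally constant background (clouds,
    sunlight gradients) is ~0, and isolated 3*3 to 5*5 bright blobs are
    strongly enhanced.
    """
    kernel = -np.ones((5, 5), dtype=np.float32)
    kernel[2, 2] = 24.0
    filtered = convolve(image.astype(np.float32), kernel, mode="nearest")
    return np.clip(filtered, 0, None)   # keep only positive (bright-spot) responses
```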

2) Target capture

As shown in Fig. 2, target capture comprises target adaptive threshold segmentation and target search with region labeling; each step is described in detail below.

Target adaptive threshold segmentation means segmenting the current grayscale image into a binary image with foreground 1 and background 0, and is carried out as follows:

a) Compute the mean of the preprocessed image:

\bar{I} = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} I(x,y)

where I(x,y) is the gray value of the pixel at coordinates (x,y), M and N are the length and width of the image, and \bar{I} is the image mean.

b) Compute the variance of the image:

S = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} \left( I(x,y) - \bar{I} \right)^2

where S is the variance of the image. The segmentation threshold of the image is then chosen as T_{seg} = \bar{I} + a \cdot S, where a is an adjustment factor, preferably between 1.75 and 2.45.
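A sketch of the adaptive segmentation described above is given below. It assumes the threshold has the form mean plus a times the spread term S computed above; since the patent's threshold expression is published only as an image, that exact form is an assumption.

```python
import numpy as np

def adaptive_binarize(image, a=2.0):
    """Segment a preprocessed frame into a 0/1 binary image.

    Assumed threshold form: T_seg = mean + a * S, with a in [1.75, 2.45]
    as recommended in the text (the patent's exact expression is an image).
    """
    img = image.astype(np.float64)
    mean = img.mean()               # global image mean (I-bar)
    s = ((img - mean) ** 2).mean()  # global spread term S
    t_seg = mean + a * s
    return (img > t_seg).astype(np.uint8)
```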

As shown in Fig. 3, target search and region labeling comprises the following steps: (a) mathematical morphology processing; (b) multi-target region labeling; (c) target merging and separation; (d) multi-target feature extraction.

(a) Mathematical morphology processing. A morphological opening is applied to the binary image obtained by adaptive threshold segmentation, that is, erosion followed by dilation, to remove small spurious connected regions and fill the larger connected regions. The structuring elements used for both erosion and dilation are 3*3 templates. Since this part consists of simple logic operations with a high degree of parallelism, it is carried out in the field-programmable logic device to increase processing speed.
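The opening operation can be sketched with standard binary morphology, as below; a 3*3 structuring element matches the text, and scipy is used here in place of the FPGA implementation described in the patent.

```python
import numpy as np
from scipy.ndimage import binary_opening

def clean_binary_mask(mask):
    """Morphological opening (erosion then dilation) with a 3*3 element.

    Removes small spurious connected regions left after thresholding;
    the patent runs the same logic in programmable logic for speed.
    """
    structure = np.ones((3, 3), dtype=bool)
    return binary_opening(mask.astype(bool), structure=structure).astype(np.uint8)
```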

(b) Multi-target region labeling. Pixels in the binary image that satisfy a given connectivity rule (4-neighborhood or 8-neighborhood connectivity) are given the same label; the present invention uses 8-neighborhood connectivity to label the image after morphological processing. In the present invention the number of image rows is N, and a target segment is defined as a set of consecutive pixels equal to 1 within one image row, denoted L[i]={Ms,Me,line}, where Ms and Me are the start and end positions of the target segment in row line. The specific steps of the algorithm are:

Step 0: label, which records the label number of a connected region, is initialized to 0; line is initialized to 0;

Step 1: Search the current row for an unlabeled target segment. If one is found, record it as {Ms,Me,line}, increment label by 1, assign label to the pixels of this target segment, and perform steps 2 to 4; if none is found, go to step 4;

Step 2: If line>0, search the previous row, line-1, for all unlabeled target segments connected to the current target segment. If any are found, assign label to the pixels of each such segment and repeat steps 2 to 3 for each of them; if none are found, or line=0, go to step 3;

Step 3: If line+1<N, search the next row, line+1, for all unlabeled target segments connected to the current target segment. If any are found, assign label to the pixels of each such segment and repeat steps 2 to 3 for each of them; if none are found, or line=N-1, do nothing;

Step 4: If the current row is finished, increment line by 1 (if line=N, terminate; otherwise go to step 1); if the current row is not finished, go to step 1.

After the above five steps the multi-target region labeling is complete, and every connected target region carries a unique label.
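The patent implements the row-segment labeling above directly on the DSP; for reference, an equivalent 8-connected labeling can be obtained with a standard library call, as sketched below. This is a functional equivalent, not the patented segment-based routine.

```python
import numpy as np
from scipy.ndimage import label

def label_regions_8conn(mask):
    """Label connected regions of a binary mask with 8-neighborhood connectivity.

    Returns a label image in which every connected target region carries a
    unique positive integer, plus the number of regions found.
    """
    structure = np.ones((3, 3), dtype=int)   # 8-connectivity
    labeled, num_regions = label(mask, structure=structure)
    return labeled, num_regions
```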

(c) Target merging and separation. On the basis of the multi-target region labeling above, if two target regions are very close to each other they are merged into one target region; otherwise they are kept separate. Two target regions are considered adjacent if the distance between their region edges is smaller than a threshold T.

The threshold T is preferably between 3 and 5 pixels.

(d) Multi-target feature extraction. The attribute features of each candidate target in the candidate target set are computed to provide criteria for the next target recognition step.

The candidate target set consists of the connected target regions remaining after the target merging and separation of the previous step.

The attribute features of a candidate target include its area, energy, aspect ratio, and centroid coordinates. A feature structure is created for each candidate target, so there are as many feature structures as candidate targets; the output is the array of feature structures, that is, the candidate target attribute feature set.
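A sketch of the per-region feature extraction is shown below; the "feature structure" of the text is represented here by a small dict per candidate, which is an illustrative layout rather than the patent's data structure.

```python
import numpy as np

def extract_candidate_features(labeled, image):
    """Compute area, energy, aspect ratio and centroid for each labeled region."""
    features = []
    for region_id in range(1, int(labeled.max()) + 1):
        ys, xs = np.nonzero(labeled == region_id)
        if ys.size == 0:
            continue
        gray = image[ys, xs].astype(np.float64)
        energy = gray.sum()                   # summed gray value
        if energy == 0:
            continue
        h = ys.max() - ys.min() + 1           # bounding-box height
        w = xs.max() - xs.min() + 1           # bounding-box width
        features.append({
            "id": region_id,
            "area": int(ys.size),             # number of foreground pixels
            "energy": energy,
            "aspect_ratio": max(h, w) / min(h, w),
            "centroid": ((xs * gray).sum() / energy,   # x, intensity weighted
                         (ys * gray).sum() / energy),  # y, intensity weighted
        })
    return features
```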

3) Multi-target recognition

As shown in Fig. 4, multi-target recognition comprises the following steps: (a) target attribute feature recognition; (b) recognition of the geometric relationship between targets; (c) target motion continuity recognition.

(a) Target attribute feature recognition. False targets are removed from the candidate target set on the basis of the targets' attribute features, retaining the targets most likely to be real beacon points. The specific method is as follows. First, the area and energy criteria are used to judge whether each candidate target is a false target: from the range over which the distance between the measured object and the camera varies as the object moves in space, and from the actual size of the beacon points, the range of beacon area and energy in the image can be estimated approximately from the pinhole imaging principle; if the area or energy of a candidate target lies outside this range, it is regarded as a false target and removed from the candidate target set. Second, the aspect-ratio attribute is used to screen the remaining candidates: since a real laser target source is a circular spot, its image is also approximately circular or elliptical, so if the aspect ratio of a candidate target exceeds a threshold T, it is regarded as a false target and removed from the candidate target set. The threshold T is preferably between 1.2 and 1.5.

This completes the target attribute feature recognition step. It removes a large number of false targets from the candidate set and reduces the computation required for the subsequent recognition of geometric relationships between targets.
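The attribute screening can be sketched as a simple range filter over the candidate feature set. The numeric area and energy bounds below are placeholders that would be derived from the measurement geometry; only the aspect-ratio bound comes from the text (1.2 to 1.5).

```python
def filter_by_attributes(candidates,
                         area_range=(4, 40),         # placeholder bounds (pixels)
                         energy_range=(200.0, 2e4),  # placeholder bounds
                         max_aspect_ratio=1.3):      # text recommends 1.2 to 1.5
    """Drop candidates whose area, energy or aspect ratio is implausible
    for a projected beacon spot (approximately circular, 3*3 to 5*5 pixels)."""
    kept = []
    for c in candidates:
        if not (area_range[0] <= c["area"] <= area_range[1]):
            continue
        if not (energy_range[0] <= c["energy"] <= energy_range[1]):
            continue
        if c["aspect_ratio"] > max_aspect_ratio:
            continue
        kept.append(c)
    return kept
```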

(b) Recognition of the geometric relationship between targets. The geometric relationship between the beacon points mounted on the measured object is used for identification.

Since measuring the position and attitude of an object requires at least 4 beacon points on the object in order to solve for its spatial position, in the method of the present invention 4 beacon points are mounted on each of the left and right sides of the measured object, 8 in total, to guarantee the accuracy and robustness of the measurement; at the same time, the beacon points extracted from the image must be matched to the corresponding spatial coordinate points on the object.

As shown in Fig. 6, the 4 beacon points mounted on each side of the measured object form an approximately square arrangement and lie approximately in the same plane. From the distance and angle of the measured object relative to the camera, together with projective geometry, the geometric relationship of the 4 beacon points projected onto the image plane can be determined. Fig. 5 shows the projection numbering of the 4 beacons in the image: the beacon point at the upper left of the measured object projects to number I in the image, the upper right to number II, the lower left to number III, and the lower right to number IV. Let the candidate target set be O={o1,o2,...,on}, where n is the number of candidate targets. The steps for identifying, in the image, the 4 points corresponding to the 4 beacon points on the measured object are as follows (a code sketch of the quadrant check in step 6 is given after this list):

Step 1: Sort the candidate target set O={o1,o2,...,on} by the vertical coordinate of the target centroids in ascending order, so that target points in the upper part of the image field of view come first in the candidate set. At the same time check the number n of candidates; if n=4, go to step 6;

Step 2: Select the i-th candidate target point from the candidate set, i=1,...,n, denote its centroid by (xi,yi), and assume it is the beacon point numbered I in Fig. 5; then judge whether beacon points numbered II, III, and IV of Fig. 5 exist among the remaining candidates, and go to the next step;

Step 3: For j=1,...,n and j≠i, judge whether the following conditions hold:

① xi + t2 > xj > xi + t1

② (second condition, involving the ratio threshold t3; given as an equation image in the original)

where t1 is preferably between 10 and 15 pixels, t2 preferably between 30 and 35 pixels, and t3 preferably between 0.75 and 0.80. If the j-th candidate target point satisfies the above two conditions, it is provisionally judged to be the beacon point numbered II in Fig. 5 and the procedure moves to the next step; if no candidate beacon point satisfies the two conditions, return to step 2;

Step 4: For k=1,...,n and k≠i, k≠j, judge whether the following conditions hold:

① yi + t2 > yk > yi + t1

② (second condition, involving the ratio threshold t3; given as an equation image in the original)

If the k-th candidate target point satisfies the above two conditions, it is provisionally judged to be the beacon point numbered III in Fig. 5 and the procedure moves to the next step; if no candidate beacon point satisfies the two conditions, return to step 2;

Step 5: For l=1,...,n and l≠i, l≠j, l≠k, judge whether the following conditions hold:

① xk + t2 > xl > xk + t1

② yj + t2 > yl > yj + t1

③ (third condition, involving the ratio threshold t4; given as an equation image in the original)

where t4 is preferably between 0.5 and 0.65. If the l-th candidate target point satisfies the above three conditions, it is provisionally judged to be the beacon point numbered IV in Fig. 5 and the procedure moves to the next step; if no candidate beacon point satisfies the three conditions, return to step 2;

Step 6: Judge the geometric imaging relationship of the 4 candidate target points. Projective geometry and analysis of experimental data show that the image of the 4 beacon points cannot form a concave quadrilateral, only a convex one, and that the four beacon points are distributed over the four quadrants of a coordinate system whose origin is the center point of the 4 beacon points, with the abscissa pointing horizontally to the right and the ordinate pointing vertically downward. The specific judgment steps are: first compute the center point of the 4 candidate target points, that is,

X_0 = (x_i + x_j + x_k + x_l)/4, \qquad Y_0 = (y_i + y_j + y_k + y_l)/4

Taking this center point (X0,Y0) as the origin, build a coordinate system and check whether each of the 4 candidate target points falls in its expected quadrant: if the coordinates of a candidate target point satisfy xi<X0 and yi<Y0, that point is the beacon point numbered I in Fig. 5; if they satisfy xi>X0 and yi<Y0, it is beacon point II; if they satisfy xi<X0 and yi>Y0, it is beacon point III; if they satisfy xi>X0 and yi>Y0, it is beacon point IV. If all 4 candidate target points are matched to the 4 beacon points of Fig. 5, go to the next step; otherwise return to step 2;

Step 7: Judge the pairwise distances of the 4 candidate beacon points. As shown in Fig. 5, the 4 beacon points can be joined into a quadrilateral; by computing the lengths of the sides of this quadrilateral, it can be further judged whether the 4 candidate points are real beacon points. That is, if a side of the quadrilateral is shorter than the minimum threshold or longer than the maximum threshold, the 4 points are not all real beacon points and the procedure returns to step 2; otherwise the 4 real beacon points have been found and the judgment ends.

The minimum threshold is preferably between 10 and 15, and the maximum threshold between 30 and 35.
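The sketch below implements the quadrant assignment of step 6: it computes the center of the four candidate centroids and checks that exactly one candidate falls in each quadrant, returning them in the order I, II, III, IV. The pairwise-distance test of step 7 is included with the thresholds quoted in the text.

```python
def assign_square_corners(points, d_min=10.0, d_max=35.0):
    """Step 6/7 sketch: map four candidate centroids to beacons I-IV.

    points: list of four (x, y) centroids. Returns [I, II, III, IV] ordered
    corners, or None if the quadrant or side-length checks fail.
    d_min/d_max are the minimum/maximum side thresholds (10-15 / 30-35 in the text).
    """
    x0 = sum(p[0] for p in points) / 4.0
    y0 = sum(p[1] for p in points) / 4.0
    corners = {}
    for (x, y) in points:
        if x < x0 and y < y0:
            key = "I"      # upper-left (image y grows downward)
        elif x > x0 and y < y0:
            key = "II"     # upper-right
        elif x < x0 and y > y0:
            key = "III"    # lower-left
        else:
            key = "IV"     # lower-right
        if key in corners:               # two candidates in the same quadrant
            return None
        corners[key] = (x, y)
    if len(corners) != 4:
        return None
    ordered = [corners[k] for k in ("I", "II", "III", "IV")]
    # step 7: side lengths of the quadrilateral I-II-IV-III must be plausible
    ring = [ordered[0], ordered[1], ordered[3], ordered[2]]
    for a, b in zip(ring, ring[1:] + ring[:1]):
        side = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        if not (d_min <= side <= d_max):
            return None
    return ordered
```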

(c) Target motion continuity recognition. The motion trajectory of each target is analyzed in order to remove false target points from the candidate target set. The target motion trajectory is judged from the feature information of the target in each frame (including target area and target position), by checking the smoothness of the change in target area between adjacent frames and the continuity of the change in target coordinates.

The smoothness of the change in target area between adjacent frames is checked as follows: let the current instant be t, and compute the mean target area over the previous n frames:

\bar{S} = \frac{1}{n} \sum_{m=t-n}^{t-1} S_m, \quad where S_m is the target area in frame m.

If the target area at the current instant t deviates from this mean beyond the allowed range (the exact condition is given as an equation image in the original), the change in target area is considered not smooth and the target is removed from the candidate target set. The value of n is preferably between 5 and 7.

The continuity of the change in target coordinate position is checked on the basis of the characteristics of the image data stream acquired by the camera in real time: the coordinate position of a target does not jump between adjacent frames, while its imaged area changes with its distance from the camera. If the target is close to the camera, its imaged area is larger and its motion distance between adjacent frames is also larger; if it is far from the camera, its imaged area is smaller and its motion distance between adjacent frames is also smaller. Using this property, the motion distance of the target between adjacent frames is divided by the target area; if the ratio exceeds a threshold T, the change in target coordinate displacement is considered not smooth and the target is removed from the candidate target set. The threshold T is preferably between 20% and 30%.
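A sketch of the two temporal checks is given below. The area-smoothness tolerance is an assumption (the patent's exact condition is given only as an image); the displacement/area ratio test uses the 20% to 30% threshold quoted in the text.

```python
def area_change_is_smooth(area_history, current_area, rel_tol=0.5):
    """Check area smoothness against the mean of the previous n frames.

    area_history: areas S_m of the last n frames (n = 5..7 in the text).
    rel_tol is an assumed relative tolerance; the patent's exact condition
    on the deviation from the mean is published only as an equation image.
    """
    mean_area = sum(area_history) / len(area_history)
    return abs(current_area - mean_area) <= rel_tol * mean_area

def position_change_is_continuous(prev_xy, curr_xy, area, ratio_threshold=0.25):
    """Check coordinate continuity: the inter-frame displacement divided by
    the target area must stay below the threshold (20% to 30% in the text)."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    displacement = (dx * dx + dy * dy) ** 0.5
    return (displacement / area) <= ratio_threshold
```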

4) Multi-target tracking

Multi-target tracking comprises the following steps: (a) multi-target initial tracking; (b) multi-target trajectory prediction; (c) multi-target lock-on tracking; (d) recapture of lost targets. The specific operation of each step is described below.

(a) Multi-target initial tracking supplements the multi-target recognition step: the continuity of target motion between adjacent frames is used to judge whether the 4 beacon points to be tracked are real beacon points. From the true speed of the measured object in space and its distance from the camera, the object's motion distance between adjacent frames can be estimated as follows:

d = \frac{f \cdot D}{\sigma \cdot L \cdot P}

where f is the focal length of the camera lens; L is the distance of the measured object from the camera; σ is the pixel size of the camera CCD; d is the motion distance of a beacon between adjacent frames, that is, the value to be estimated; D is the motion speed of the measured object; and P is the frame rate of the camera. Because this is an approximate estimate, the motion component of the object along the camera's optical axis is not considered. By bounding D by its maximum value, the maximum of d can be estimated. If an initially tracked beacon point moves farther than this maximum of d between adjacent frames, it is judged to be a false target. If, for T consecutive frames, the motion of the beacons between adjacent frames stays within the maximum, the next step is entered; T is preferably 4.
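With the symbols just defined, the bound on the per-frame image motion can be sketched as below. The formula d = f*D / (sigma*L*P) is the reconstruction shown above, which follows from the pinhole model under the stated approximation; the numbers in the usage line are illustrative only and are not taken from the patent.

```python
def max_interframe_motion_px(f_mm, L_mm, sigma_mm, D_max_mm_s, P_hz):
    """Estimate the largest per-frame beacon motion d (in pixels).

    d = f * D / (sigma * L * P): image-plane motion of a beacon for an object
    moving at speed D at range L, observed at frame rate P with pixel size
    sigma and focal length f. Units here are millimetres and hertz; motion
    along the optical axis is ignored, as in the text.
    """
    return (f_mm * D_max_mm_s) / (sigma_mm * L_mm * P_hz)

# Illustrative numbers only: 25 mm lens, 10 m range, 5.5 um pixels,
# object speed 2 m/s, 50 Hz camera -> about 18 pixels per frame.
d_max = max_interframe_motion_px(25.0, 10_000.0, 0.0055, 2_000.0, 50.0)
```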

(b) According to the present invention, multi-target trajectory prediction means predicting the target's displacement at the next instant from its velocity and displacement at the current instant. Because the data are sampled in real time, the target's displacement between adjacent instants is small and a linear prediction can be used to estimate it. The displacement of the target at the next instant k+1 is predicted as follows:

(the prediction expression, a function of f(k), f(k-1), and f(k-2), is given as an equation image in the original)

where f(k), f(k-1), and f(k-2) are the displacements of the target in image frames k, k-1, and k-2, respectively.
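Since the patent's prediction expression is published only as an image, the sketch below uses one common choice consistent with the description, a constant-acceleration extrapolation from the last three displacements; treat it as an assumed stand-in rather than the patented formula.

```python
def predict_next_displacement(f_k, f_k1, f_k2):
    """Extrapolate the displacement at frame k+1 from frames k, k-1, k-2.

    Constant-acceleration (second-order) extrapolation, an assumed stand-in
    for the patent's expression: f(k+1) = 3 f(k) - 3 f(k-1) + f(k-2).
    For 2D use, call it separately on the x and y displacements.
    """
    return 3.0 * f_k - 3.0 * f_k1 + f_k2
```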

(c) According to the present invention, multi-target lock-on tracking means extracting the centroid position of each beacon at the current instant in real time and building the motion trajectory of each beacon. The specific computation is as follows: centered on each beacon position predicted by the trajectory prediction, a window of size T*T is built and the centroid of that region is computed, where the preferred value of T depends on Si, the area of the i-th beacon point (the preferred expression for T is given as an equation image in the original). The centroid of each beacon point is computed by the following formulas:

X_i = \frac{\sum_{(x,y) \in W_i} x \cdot I(x,y)}{\sum_{(x,y) \in W_i} I(x,y)}, \qquad Y_i = \frac{\sum_{(x,y) \in W_i} y \cdot I(x,y)}{\sum_{(x,y) \in W_i} I(x,y)}

where I(x,y) is the gray value of the image at position (x,y), W_i is the T*T window around the predicted position of the i-th beacon, and (Xi,Yi) is the centroid of the i-th beacon point.
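The lock-on centroid computation can be sketched directly from the formulas above. The window side T is taken here as roughly twice the square root of the beacon area, which is an assumption, since the patent gives the preferred expression for T only as an image.

```python
import numpy as np

def window_centroid(image, predicted_xy, beacon_area):
    """Intensity-weighted centroid inside a T*T window around the predicted position.

    T is assumed to scale with the beacon size (about 2 * sqrt(area) here);
    the patent's preferred expression for T is published only as an image.
    """
    t = max(3, int(round(2.0 * np.sqrt(beacon_area))))
    half = t // 2
    cx, cy = int(round(predicted_xy[0])), int(round(predicted_xy[1]))
    y0, y1 = max(0, cy - half), min(image.shape[0], cy + half + 1)
    x0, x1 = max(0, cx - half), min(image.shape[1], cx + half + 1)
    window = image[y0:y1, x0:x1].astype(np.float64)
    total = window.sum()
    if total <= 0:
        return None                       # nothing to lock onto in this window
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return ((xs * window).sum() / total,  # X_i
            (ys * window).sum() / total)  # Y_i
```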

(d) According to the present invention, recapture of lost targets means that when a target is lost, the tracking algorithm continues to lock onto and track the remaining target points and uses them for photogrammetry to compute the object's position and attitude, provided the total number of beacons remaining in the two cameras is at least 4. When the lost target reappears, the recapture method immediately captures it and adds it back into the photogrammetric equation system, improving the accuracy with which the equations are solved and the accuracy of the computed object position and attitude.

The basic principle of the lost-target recapture method is projection error minimization. As shown in Fig. 6, the 4 beacon points on one side of the measured object 121 in space are projected into the image plane 122 by the photographic transformation, according to the following transformation:

X_C = R_1 R_2 X_O + R_1 T_2 + T_1

so that the rotation matrix R1R2 and translation vector R1T2+T1 relating the 3D coordinate points on the measured object to the camera coordinate system can be computed, where R1 and T1 are the rotation and translation of the object's own coordinate system relative to the reference coordinate system; R2 and T2 are the rotation and translation of the reference coordinate system relative to the camera coordinate system; XO denotes object coordinates, XH reference coordinates, and XC camera coordinates. Since R1 and T1 are the final photogrammetric results, they can still be computed as long as no more than 4 targets are lost, and R2 and T2 are the calibrated camera extrinsic parameters, so the 3D spatial coordinates of the measured object {(X1,Y1),...,(X8,Y8)} can be transformed into image-plane coordinates {(x1,y1),...,(x8,y8)} by the pinhole imaging principle. If a beacon point i is lost during tracking, then while the other, non-lost beacon points continue to be tracked, the search continues in the neighborhood where the lost beacon i may reappear: taking the projection point (xi,yi) of lost beacon i as the center and a threshold T as the radius, the algorithm searches whether the currently lost beacon i appears again. The threshold T is preferably between 5 and 7 pixels.
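A sketch of the recapture search is given below. It assumes the composite pose maps object-frame points into the camera frame as written above, and that a standard pinhole projection with focal length f and principal point (cx, cy) takes camera-frame points to pixels; those intrinsic names are illustrative, since the patent states the transformation only at the level of R1, R2, T1, T2.

```python
import numpy as np

def reproject_beacon(X_obj, R1, R2, T1, T2, f, cx, cy):
    """Project a 3D beacon point (object frame) into the image.

    Pose composition follows the text: X_C = R1 @ R2 @ X_obj + R1 @ T2 + T1.
    The pinhole intrinsics (f, cx, cy) are an assumed parameterisation.
    """
    Xc = R1 @ R2 @ X_obj + R1 @ T2 + T1
    return np.array([f * Xc[0] / Xc[2] + cx,
                     f * Xc[1] / Xc[2] + cy])

def search_lost_beacon(candidates, projected_xy, radius=6.0):
    """Look for a reappearing beacon among the current detections.

    Returns the candidate centroid closest to the projected position of the
    lost beacon if it lies within `radius` pixels (5-7 in the text), else None.
    """
    best, best_d = None, radius
    for (x, y) in candidates:
        d = float(np.hypot(x - projected_xy[0], y - projected_xy[1]))
        if d <= best_d:
            best, best_d = (x, y), d
    return best
```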

Fig. 7 shows the flow chart of multi-target tracking. First, initial target tracking is performed; when the number of initially tracked frames exceeds 4, the procedure moves to target trajectory prediction. After the trajectory prediction has predicted the position of the target at the next instant, the target lock-on tracking stage is entered: centered on the predicted target position, the centroid of that region is computed, and this point is the tracked target position. At the same time, it is checked whether any target has been lost; if not, the procedure returns to trajectory prediction, otherwise it enters the lost-target recapture stage. This process repeats; if all 4 beacons on one side of the measured object are lost during its motion, target capture is performed anew.

Fig. 8 is a schematic diagram of the state transitions between the constituent units of the multi-target rapid capture and tracking method. When the image data stream arrives, target capture is performed first and then target recognition; if the target recognition unit does not find real beacon points, control returns to the target capture unit, which continues with target capture on the next frame; otherwise the target initial tracking unit is entered. Target initial tracking builds an initial motion trajectory for each beacon point and judges the continuity of the trajectory; if it is not continuous, control returns to the target capture unit, otherwise the target lock-on tracking unit is entered. Target lock-on tracking completes the extraction of the target centroid points through a series of processes and judges whether targets have been lost; if all targets are lost, the target capture unit is entered again.

Claims (1)

1. A multi-target rapid capture and tracking method for binocular vision measurement, characterized in that the method comprises:
Step 1: image preprocessing under strong-clutter background conditions;
Step 2: target capture, the target capture comprising target adaptive threshold segmentation and target search with region labeling;
the target adaptive threshold segmentation meaning that a segmentation threshold is computed automatically from the global image characteristics and used to segment the image into a binary image with foreground 1 and background 0;
the target search and region labeling meaning that all connected regions of the segmented binary image are computed and the attribute features of each connected region are collected, comprising mathematical morphology processing, multi-target region labeling, target merging and separation, and multi-target feature extraction;
Step 3: multi-target recognition, comprising target attribute feature recognition, recognition of the geometric relationship between targets, and target motion continuity recognition;
the target attribute feature recognition meaning that false targets are removed from the candidate target set on the basis of the targets' attribute features, retaining the targets most likely to be real beacon points; if the area and energy features of a candidate target exceed a reasonable range, it is regarded as a false target and rejected;
the recognition of the geometric relationship between targets meaning that the 4 beacon points mounted on each side of the measured object project onto the image in an approximately square arrangement, so that the geometric positions of the 4 beacon points relative to one another can be used to find the 4 real beacon points in the candidate target set;
the target motion continuity recognition meaning that the motion trajectory of each target is analyzed in order to remove false target points from the candidate target set;
the target motion trajectory being judged from the feature information of the target in each frame, including target area and target position, by checking the smoothness of the change of the target between adjacent frames and the continuity of the change in target coordinates;
Step 4: multi-target tracking, comprising multi-target initial tracking, multi-target trajectory prediction, multi-target lock-on tracking, and recapture of lost targets;
the multi-target initial tracking judging, from the continuity of target motion between adjacent frames, whether the 4 beacon points to be tracked are real beacon points, while building the motion trajectory of each beacon, and supplementing the multi-target recognition step;
the multi-target trajectory prediction meaning that the target's displacement at the next instant is predicted from its velocity and displacement at the current instant;
the multi-target lock-on tracking meaning that the centroid coordinates of each beacon at the current instant are extracted in real time and the motion trajectory of each beacon is built;
the recapture of lost targets meaning that when a target is lost, the tracking algorithm continues to lock onto and track the remaining target points and uses them for photogrammetry to compute the three-dimensional spatial position and attitude of the object, provided the total number of beacons remaining in the two cameras is at least 4; and when a beacon lost from the image reappears, the recapture method immediately captures it and adds it back into the photogrammetric equation system, improving the accuracy with which the equations are solved and the accuracy of the computed object position and attitude.
CN201910738024.XA 2019-08-12 2019-08-12 Multi-target rapid capturing and tracking method in binocular vision measurement Active CN110490903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910738024.XA CN110490903B (en) 2019-08-12 2019-08-12 Multi-target rapid capturing and tracking method in binocular vision measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910738024.XA CN110490903B (en) 2019-08-12 2019-08-12 Multi-target rapid capturing and tracking method in binocular vision measurement

Publications (2)

Publication Number Publication Date
CN110490903A (en) 2019-11-22
CN110490903B (en) 2022-11-11

Family

ID=68550618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910738024.XA Active CN110490903B (en) 2019-08-12 2019-08-12 Multi-target rapid capturing and tracking method in binocular vision measurement

Country Status (1)

Country Link
CN (1) CN110490903B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724445B (en) * 2020-05-08 2023-05-26 华中科技大学 Recognition method and system for a large field of view and small size identification code
CN113327244B (en) * 2021-06-25 2024-09-13 南京爱奇艺智能科技有限公司 Computer vision-based positioning method and system for LED lamp of handle controller
CN114299112B (en) * 2021-12-24 2023-01-13 萱闱(北京)生物科技有限公司 Method, device, medium and computing device for trajectory recognition based on multi-target
CN114862790A (en) * 2022-05-05 2022-08-05 南京久盯自动化科技有限公司 Vision-based steel ball front side surface flaw correction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204279B2 (en) * 2015-07-30 2019-02-12 Magna Electronics Inc. Vehicle vision system with object detection

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007008715A2 (en) * 2005-07-07 2007-01-18 Ingenious Targeting Laboratory, Inc. System for 3d monitoring and analysis of motion behavior of targets
CN101609504A (en) * 2009-07-21 2009-12-23 华中科技大学 A method for detection, recognition and positioning of infrared image sea surface targets
CN102141398A (en) * 2010-12-28 2011-08-03 北京航空航天大学 Monocular vision-based method for measuring positions and postures of multiple robots
CN103839274A (en) * 2014-03-25 2014-06-04 中国科学院光电技术研究所 Extended target tracking method based on geometric proportion relation
CN108731587A (en) * 2017-04-14 2018-11-02 中交遥感载荷(北京)科技有限公司 Vision-based dynamic target tracking and localization method for unmanned aerial vehicles
CN107909604A (en) * 2017-11-07 2018-04-13 武汉科技大学 Dynamic object motion trajectory recognition method based on binocular vision
CN109978851A (en) * 2019-03-22 2019-07-05 北京航空航天大学 Detection and tracking method for weak aerial moving targets in infrared video
CN110009682A (en) * 2019-03-29 2019-07-12 北京理工大学 A Target Recognition and Localization Method Based on Monocular Vision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on Target Tracking Algorithm Based on Parallel Binocular Camera; Ya'nan Xu et al.; 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC); 2019-08-05; pp. 1483-1486 *
Research on Multi-Target Detection and Tracking Algorithms Based on Binocular Stereo Vision; Zhang Fangbing; China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology Series; 2019-02-15 (No. 02); I138-2096 *
Research on Multi-Target Tracking Methods Based on Stereo Vision; Yang Yu; China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology Series; 2011-04-15 (No. 04); I138-1003 *
Multi-Target Detection and Tracking in Complex Backgrounds; Liu Yuchen; China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology Series; 2015-12-15 (No. 12); I138-481 *
Kernelized Correlation Filter Tracking Algorithm with Feature Fusion; Wu Zhaotong et al.; Foreign Electronic Measurement Technology; 2019-02-15; Vol. 40, No. 01; pp. 108-111 *

Also Published As

Publication number Publication date
CN110490903A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110490903B (en) Multi-target rapid capturing and tracking method in binocular vision measurement
Su et al. Real-time dynamic SLAM algorithm based on deep learning
CN113223045B (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN105335986B (en) Method for tracking target based on characteristic matching and MeanShift algorithm
CN109949361A (en) An Attitude Estimation Method for Rotor UAV Based on Monocular Vision Positioning
CN111563442A (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN111461023A (en) Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar
CN113506318A (en) A 3D object perception method in vehicle edge scene
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN110097584A (en) The method for registering images of combining target detection and semantic segmentation
WO2015096507A1 (en) Method for recognizing and locating building using constraint of mountain contour region
CN102831617A (en) Method and system for detecting and tracking moving object
CN110084830B (en) Video moving object detection and tracking method
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN112818905B (en) Finite pixel vehicle target detection method based on attention and spatio-temporal information
CN104992453A (en) Target tracking method under complicated background based on extreme learning machine
CN110245566B (en) A long-distance tracking method for infrared targets based on background features
CN103778645A (en) Circular target real-time tracking method based on images
CN117036404B (en) A monocular thermal imaging simultaneous positioning and mapping method and system
CN106372590A (en) Sea surface ship intelligent tracking system and method based on machine vision
Liangjun et al. MSFA-YOLO: A multi-scale SAR ship detection algorithm based on fused attention
CN116468786B (en) Semantic SLAM method based on point-line combination and oriented to dynamic environment
CN114677531A (en) Water surface unmanned ship target detection and positioning method fusing multi-mode information
Feng et al. Intelligent segmentation and quantification of tunnel lining cracks via computer vision
CN114998437A (en) A method for centroid location of space target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant