CN118628533B - Target tracking method and computer equipment - Google Patents

Target tracking method and computer equipment

Info

Publication number
CN118628533B
Authority
CN
China
Prior art keywords
target
sensor
fusion
view
bird
Prior art date
Legal status
Active
Application number
CN202411109434.5A
Other languages
Chinese (zh)
Other versions
CN118628533A
Inventor
陆超
王政军
郑幽娴
巫立峰
李加琛
周仁杰
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202411109434.5A
Publication of CN118628533A
Application granted
Publication of CN118628533B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a target tracking method and computer device. The method comprises: matching targets within the common-view area of a bird's-eye view of a target road section based on the frame-synchronized output information of each sensor, to obtain a target matching result; updating, based on the target matching result, the sensor identifier and target identifier corresponding to each fused target in the common-view area, and updating the position information of each fused target in the bird's-eye view of the target road section; and obtaining a continuous display result for each fused target along the target road section based on the frame-synchronized output information of each sensor, the updated sensor identifiers and updated target identifiers, and the updated position information of each fused target in the bird's-eye view. This method addresses the difficulty of continuous tracking found in existing vehicle tracking methods.

Description

A target tracking method and computer device

Technical Field

The present application relates to the field of intelligent transportation technology, and in particular to a target tracking method and computer device.

Background Art

With the development of electronic technology, intelligent transportation has attracted increasing attention. In intelligent traffic management systems such as autonomous driving, it is crucial to track the trajectory of a target vehicle efficiently and accurately through a preset road section or tunnel and to generate the vehicle's trajectory over the entire section.

Existing vehicle tracking methods mainly adopt cross-camera multi-target cascade matching with matching priorities: the image information of each camera is acquired, the motion state of each tracked target is estimated, the appearance features of all tracked targets are collected, and these appearance features are used to associate targets across cameras so as to track vehicles over the entire road section. However, because such methods rely solely on vehicle appearance features for cross-camera association, mismatches, missed matches, or failed matches occur when cameras are deployed far apart, vehicles are occluded, or license plates cannot be recognized, making continuous tracking difficult.

No effective solution has yet been proposed for the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, and license plate recognition failures.

Summary of the Invention

In view of this, it is necessary to provide a target tracking method and computer device to address the above technical problems.

In a first aspect, the present application provides a target tracking method. The method comprises the following steps:

Matching targets within the common-view area of a bird's-eye view of a target road section based on the frame-synchronized output information of each sensor, to obtain a target matching result. The common-view area is the overlapping region between the bird's-eye-view fields of view of adjacent sensors deployed along the target road section.

Updating, based on the target matching result, the sensor identifier and target identifier corresponding to each fused target in the common-view area, and updating the position information of each fused target in the bird's-eye view of the target road section. A target identifier is the identifier of a target in the frame-synchronized output information of a sensor.

Obtaining a continuous display result of each fused target along the target road section based on the frame-synchronized output information of each sensor, the updated sensor identifiers and updated target identifiers corresponding to the fused targets in the common-view area, and the updated position information of each fused target in the bird's-eye view of the target road section.
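
For illustration only, these three steps can be organized as a per-frame loop. The following Python sketch uses hypothetical names (FusionTarget, match_common_view_targets, update_fusion_records, render_bev) that stand in for the steps above rather than describing an actual implementation of the claims:

```python
from dataclasses import dataclass

@dataclass
class FusionTarget:
    fusion_obj_id: int        # unique identifier of the fused target
    view_id: str              # sensor currently providing the observation
    obj_id: int               # target identifier within that sensor's output
    bev_position: tuple       # (x, y) position in the bird's-eye view

def track_one_frame(frame_outputs, fusion_targets,
                    match_common_view_targets, update_fusion_records, render_bev):
    """One iteration of the tracking loop.

    frame_outputs:  {view_id: [detections]} after frame synchronization.
    fusion_targets: {fusion_obj_id: FusionTarget}, maintained across frames.
    The three callables stand in for the three steps above.
    """
    matches = match_common_view_targets(frame_outputs)   # step 1: match in common view
    update_fusion_records(fusion_targets, matches)       # step 2: update IDs and positions
    return render_bev(frame_outputs, fusion_targets)     # step 3: continuous display
```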

In one embodiment, before matching the targets within the common-view area of the bird's-eye view of the target road section based on the frame-synchronized output information of each sensor to obtain the target matching result, the method includes the following steps:

Performing frame synchronization on the output information of all sensors located on the target road section to obtain the frame-synchronized output information of each sensor. The output information of a sensor includes each target within its field of view and the attribute information of each target.

Determining all fused targets within the target road section based on the frame-synchronized output information of each sensor and the field of view of each sensor in the bird's-eye view of the target road section, to obtain a fused target set.

Determining, based on the fused target set, the mapping relationship among each fused target, a sensor identifier, and a target identifier, as well as the position information of each fused target in the bird's-eye view of the target road section.

In one embodiment, before determining all fused targets within the target road section based on the frame-synchronized output information of each sensor and the field of view of each sensor in the bird's-eye view of the target road section to obtain the fused target set, the method includes the following step:

Determining the field of view of each sensor in the bird's-eye view of the target road section based on the position information and calibration information of each sensor located on the target road section.

In one embodiment, before determining the field of view of each sensor in the bird's-eye view of the target road section based on the position information and calibration information of each sensor, the method includes the following step:

Determining the position information of each sensor located on the target road section based on the sensor type and the road environment of the target road section.

In one embodiment, determining all fused targets within the target road section based on the frame-synchronized output information of each sensor and the field of view of each sensor in the bird's-eye view of the target road section to obtain the fused target set includes the following steps:

Taking, based on the frame-synchronized output information of each sensor, each target within the bird's-eye-view field of view of the first sensor of the target road section as a first fused target. The first sensor of the target road section is the first sensor a target encounters when entering the road section.

Taking, as a second fused target, a target that first appears within the bird's-eye-view field of view of any sensor of the target road section other than the first sensor, in a region outside the common-view area.

Generating the fused target set based on the first fused targets and the second fused targets.
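
A minimal sketch of this initialization rule, assuming a helper in_common_view that tests whether a bird's-eye-view position falls inside any common-view area of the given sensor (all names are illustrative):

```python
import itertools

_next_fusion_id = itertools.count(1)

def update_fusion_set(frame_outputs, sensor_order, in_common_view, fusion_set):
    """frame_outputs: {view_id: [(obj_id, bev_xy), ...]} for one synchronized frame.
    fusion_set: {(view_id, obj_id): fusion_obj_id}, maintained across frames."""
    first = sensor_order[0]
    for obj_id, _xy in frame_outputs.get(first, []):
        if (first, obj_id) not in fusion_set:
            # Rule 1: every target of the first sensor is a first fused target.
            fusion_set[(first, obj_id)] = next(_next_fusion_id)
    for view_id in sensor_order[1:]:
        for obj_id, bev_xy in frame_outputs.get(view_id, []):
            unseen = (view_id, obj_id) not in fusion_set
            if unseen and not in_common_view(view_id, bev_xy):
                # Rule 2: a target first appearing to a later sensor outside
                # any common-view area becomes a second fused target.
                fusion_set[(view_id, obj_id)] = next(_next_fusion_id)
    return fusion_set
```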

In one embodiment, matching the targets within the common-view area of the bird's-eye view of the target road section based on the frame-synchronized output information of each sensor to obtain the target matching result includes:

Performing, based on the frame-synchronized output information of each sensor and under the bird's-eye-view perspective, an intersection-over-union (IoU) calculation between the targets of adjacent sensors within the common-view area, to obtain a first IoU result; and matching the targets of the adjacent sensors within the common-view area based on the first IoU result, to obtain the target matching result;

Or, determining, based on the frame-synchronized output information of each sensor and under the bird's-eye-view perspective, first point-set features between each target in the common-view area and each target within the bird's-eye-view field of view of the first of the two adjacent sensors, and second point-set features between each target in the common-view area and each target within the bird's-eye-view field of view of the second of the two adjacent sensors; performing similarity matching on the targets in the common-view area based on the first and second point-set features, to obtain a similarity matching result; and obtaining the target matching result based on the similarity matching result;

Or, projecting, based on the frame-synchronized output information of each sensor, each target within the bird's-eye-view field of view of the first of the two adjacent sensors into the bird's-eye-view field of view of the second sensor, to obtain projected targets; performing an IoU calculation between the projected targets and the targets within the bird's-eye-view field of view of the second sensor, to obtain a second IoU result; and matching the targets of the adjacent sensors within the common-view area based on the second IoU result, to obtain the target matching result.
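
The first of these variants can be sketched as follows; the axis-aligned box format and the 0.3 IoU threshold are assumptions, not values taken from this application:

```python
def bev_iou(box_a, box_b):
    """IoU of two axis-aligned bird's-eye-view boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_common_view(targets_a, targets_b, iou_thresh=0.3):
    """Greedily pair targets of two adjacent sensors inside the common view.

    targets_a / targets_b: {obj_id: bev_box}. Returns [(obj_id_a, obj_id_b)].
    """
    pairs, used_b = [], set()
    for id_a, box_a in targets_a.items():
        best_id, best_iou = None, iou_thresh
        for id_b, box_b in targets_b.items():
            if id_b in used_b:
                continue
            iou = bev_iou(box_a, box_b)
            if iou > best_iou:
                best_id, best_iou = id_b, iou
        if best_id is not None:
            used_b.add(best_id)
            pairs.append((id_a, best_id))
    return pairs
```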

In one embodiment, updating, based on the target matching result, the sensor identifier and target identifier corresponding to each fused target in the common-view area and updating the position information of each fused target in the bird's-eye view of the target road section includes:

When a target within the bird's-eye-view field of view of the first of the two adjacent sensors exists in the fused-target list, updating, based on the target matching result, the sensor identifier and target identifier corresponding to each fused target in the common-view area, and updating the position information of each fused target in the common-view area in the bird's-eye view.

In one embodiment, when the target within the bird's-eye-view field of view of the first of the two adjacent sensors exists in the fused-target list, updating the sensor identifier and target identifier corresponding to each fused target in the common-view area based on the target matching result and updating the position information of each fused target in the bird's-eye view includes:

When a first target within the bird's-eye-view field of view of the first of the two adjacent sensors exists in the fused-target list, determining, based on the target matching result, the second target within the bird's-eye-view field of view of the second of the two adjacent sensors that matches the first target;

Updating the sensor identifier corresponding to the fused target in the common-view area to the identifier of the second sensor, updating the target identifier corresponding to the fused target to the identifier of the second target within the bird's-eye-view field of view of the second sensor, and updating the position information of the fused target in the bird's-eye view to the position information of the second target within the bird's-eye-view field of view of the second sensor.
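
A minimal sketch of this handover, assuming a fused-target record with view_id, obj_id, and bev_position fields as in the earlier sketch:

```python
def hand_over(fusion_target, second_view_id, second_obj_id, second_bev_position):
    """Re-point a fused target from the first sensor's matched target to the
    second sensor's matched target inside the common-view area."""
    fusion_target.view_id = second_view_id            # updated sensor identifier
    fusion_target.obj_id = second_obj_id              # updated target identifier
    fusion_target.bev_position = second_bev_position  # updated BEV position
```

After the handover, the fused target is driven by the second sensor's observations until it reaches the next common-view area.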

In one embodiment, after updating the sensor identifier and target identifier corresponding to each fused target in the common-view area based on the target matching result and updating the position information of each fused target in the bird's-eye view, the method further includes:

When a fused target is lost, traversing the frame-synchronized output information of the third sensor, i.e. the sensor corresponding to the field of view in which the fused target was lost, and determining whether that output information contains a third target whose attribute information is consistent with that of the lost fused target;

When the frame-synchronized output information of the third sensor contains such a third target, updating the sensor identifier corresponding to the fused target to the identifier of the third sensor, updating the target identifier corresponding to the fused target to the identifier of the third target within the bird's-eye-view field of view of the third sensor, and updating the position information of the fused target in the bird's-eye view to the position information of the third target;

When the frame-synchronized output information of the third sensor does not contain such a third target, traversing the frame-synchronized output information of a fourth sensor and determining whether it contains a fourth target whose attributes are consistent with those of the lost fused target; the fourth sensor is the sensor corresponding to the next field of view a target enters after leaving the field of view of the third sensor;

When the frame-synchronized output information of the fourth sensor contains such a fourth target, updating the sensor identifier corresponding to the fused target to the identifier of the fourth sensor, updating the target identifier corresponding to the fused target to the identifier of the fourth target within the bird's-eye-view field of view of the fourth sensor, and updating the position information of the fused target in the bird's-eye view to the position information of the fourth target;

When the frame-synchronized output information of the fourth sensor does not contain such a fourth target, determining, according to the positional relationships between the fused target (before it was lost) and multiple adjacent fused targets, the targets in the fourth sensor's frame-synchronized output information that correspond to those adjacent fused targets; estimating, based on those corresponding targets and the recorded positional relationships, first position information of the lost fused target within the bird's-eye-view field of view of the fourth sensor; and updating the position information of the fused target in the bird's-eye view to this first position information.
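
The fallback logic of this embodiment can be sketched in three stages; the attribute comparison and the neighbor-offset estimate below are illustrative assumptions:

```python
def find_by_attributes(lost_attrs, sensor_output):
    """Return (obj_id, bev_xy) of a target whose attributes (color, plate,
    vehicle type, ...) equal those of the lost fused target, else None."""
    for obj_id, (attrs, bev_xy) in sensor_output.items():
        if attrs == lost_attrs:      # assumption: exact attribute comparison
            return obj_id, bev_xy
    return None

def recover_lost_target(lost, third_view, third_out, fourth_view, fourth_out,
                        neighbor_offsets, current_positions):
    """Three-stage fallback for a lost fused target.

    third_out / fourth_out: {obj_id: (attrs, bev_xy)} frame-synchronized
    outputs of the sensor where the target was lost and of the next sensor.
    neighbor_offsets: {neighbor_fusion_id: (dx, dy)} recorded before the
    loss, where (dx, dy) = neighbor_position - lost_position.
    current_positions: {fusion_id: (x, y)} current BEV positions.
    """
    # Stage 1: search again in the view where the target disappeared.
    hit, view = find_by_attributes(lost.attrs, third_out), third_view
    if hit is None:
        # Stage 2: search the next view along the direction of travel.
        hit, view = find_by_attributes(lost.attrs, fourth_out), fourth_view
    if hit is not None:
        lost.view_id = view
        lost.obj_id, lost.bev_position = hit
        return lost
    # Stage 3: estimate the position from still-tracked neighbors using the
    # offsets recorded before the target was lost.
    guesses = [(current_positions[n][0] - dx, current_positions[n][1] - dy)
               for n, (dx, dy) in neighbor_offsets.items()
               if n in current_positions]
    if guesses:
        lost.bev_position = (sum(x for x, _ in guesses) / len(guesses),
                             sum(y for _, y in guesses) / len(guesses))
    return lost
```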

In a second aspect, the present application further provides a computer device. The computer device comprises a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the target tracking method of the first aspect is implemented.

In the above target tracking method and computer device, targets within the common-view area of the bird's-eye view of the target road section are matched based on the frame-synchronized output information of each sensor to obtain a target matching result. Based on this result, the sensor identifier and target identifier corresponding to each fused target in the common-view area are updated, as is the position information of each fused target in the bird's-eye view. Matching the targets within the common-view area establishes an association between the different viewpoints of two successive sensors, so that observations of the same fused target from different fields of view are associated and fused. Then, from the frame-synchronized output information of each sensor, the updated sensor and target identifiers, and the updated position information, the same target can be displayed continuously in a single bird's-eye view. This achieves continuous tracking of the target and solves the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, and license plate recognition failures.

Details of one or more embodiments of the present application are set forth in the following drawings and description, so that other features, objects, and advantages of the present application are easier to understand.

Brief Description of the Drawings

The drawings described herein are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments and their descriptions explain the present application and do not unduly limit it. In the drawings:

FIG. 1 is a block diagram of the hardware structure of a terminal running the target tracking method according to an embodiment of the present application;

FIG. 2 is a flowchart of the target tracking method according to an embodiment of the present application;

FIG. 3 is a schematic diagram of the cameras along a target road section according to an embodiment of the present application;

FIG. 4 is a schematic diagram of the field of view of camera D in the bird's-eye view of the target road section according to an embodiment of the present application;

FIG. 5 is a schematic diagram of the position information of a fused target in the bird's-eye view of the target road section according to an embodiment of the present application;

FIG. 6 is a schematic diagram of target tracking results according to an embodiment of the present application;

FIG. 7 is a flowchart of the target tracking method according to a preferred embodiment of the present application;

FIG. 8 is a structural block diagram of a target tracking device according to an embodiment of the present application.

Detailed Description

To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described and illustrated below with reference to the accompanying drawings and embodiments.

Unless otherwise defined, the technical and scientific terms used in this application have the meanings ordinarily understood by a person skilled in the art to which this application belongs. Words such as "a", "an", "the", and "these" do not denote a limitation on quantity and may be singular or plural. The terms "include", "comprise", "have", and their variants are intended to be non-exclusive: a process, method, system, product, or device comprising a series of steps or modules (units) is not limited to the listed steps or modules (units) and may include unlisted ones, or other steps or modules (units) inherent to the process, method, product, or device. Words such as "connect", "connected", and "coupled" are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. "Multiple" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, A and B together, or B alone; the character "/" generally indicates an "or" relationship between the surrounding objects. The terms "first", "second", "third", and so on merely distinguish similar objects and do not imply a particular ordering.

The method embodiments provided herein can be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, FIG. 1 is a block diagram of the hardware structure of a terminal running the target tracking method of this embodiment. As shown in FIG. 1, the terminal may include one or more processors 102 (only one is shown in FIG. 1) and a memory 104 for storing data, where the processor 102 may include, but is not limited to, a processing device such as a microcontroller (MCU) or a programmable logic device (FPGA). The terminal may further include a transmission device 106 for communication and an input/output device 108. A person of ordinary skill in the art will understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the terminal; for example, the terminal may include more or fewer components than shown in FIG. 1 or have a different configuration.

The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the target tracking method in this embodiment. By running the computer programs stored in the memory 104, the processor 102 executes various functional applications and data processing, i.e., implements the above method. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, connected to the terminal via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is used to receive or send data via a network. The network may include a wireless network provided by the communication provider of the terminal. In one instance, the transmission device 106 includes a network interface controller (NIC), which can connect to other network devices through a base station to communicate with the Internet. In another instance, the transmission device 106 may be a radio frequency (RF) module used to communicate with the Internet wirelessly.

This embodiment provides a target tracking method. FIG. 2 is a flowchart of the target tracking method of this embodiment. As shown in FIG. 2, the process includes the following steps:

Step S210: based on the frame-synchronized output information of each sensor, match the targets within the common-view area of the bird's-eye view of the target road section to obtain a target matching result. The common-view area is the overlapping region between the bird's-eye-view fields of view of adjacent sensors deployed along the target road section.

The above sensors are sensors deployed at preset positions along the target road section and may be inductive loop sensors, ultrasonic sensors, infrared sensors, radar sensors, video sensors, or the like. For ease of distinction, the sensors of the current road section can be numbered and assigned a sensor identifier, for example an ID per sensor. For instance, the sensors may be cameras labeled with letters in the order in which a target passes them along the road section. FIG. 3 is a schematic diagram of the cameras along a target road section according to an embodiment of the present application. As shown in FIG. 3, multiple cameras, camera A, camera B, camera C, camera D, and so on, are arranged in order along the road section. Each camera has a blind spot: the small triangle formed by the dashed line through the camera's position and the solid line crossing the road surface is the camera's blind spot.

The frame-synchronized output information is the per-frame image information captured by all sensors along the target road section at the same instant, obtained after frame synchronization. The target road section may be a bridge, a tunnel, or any other section on which target vehicles need to be tracked continuously. The bird's-eye view of the target road section is the top-down view formed by looking down at the road section from a point above it; the viewpoint can be chosen case by case, as long as every target in the bird's-eye view is clearly displayed.

Matching the targets within the common-view area of the bird's-eye view may be performed with a preset matching algorithm, which may be one or more of IoU (Intersection over Union) matching under the three-dimensional bird's-eye-view perspective, point-set matching under the three-dimensional bird's-eye-view perspective, target matching under a two-dimensional perspective, or trajectory matching. The target matching result may be the matching result for the same target captured by two adjacent sensors at the same instant within their common-view area.
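
For the point-set variant, one possible (assumed) realization describes each common-view target by its distances to nearby targets in the same view and compares these feature vectors across the two adjacent views; the neighbor count k and the similarity form are illustrative:

```python
import numpy as np

def point_set_feature(target_xy, all_xy_in_view, k=4):
    """Relative-geometry feature of one target: sorted distances to its k
    nearest neighbours within the same sensor's BEV view (k is assumed)."""
    d = np.linalg.norm(np.asarray(all_xy_in_view) - np.asarray(target_xy), axis=1)
    d = np.sort(d[d > 0])                      # drop the zero self-distance
    return np.pad(d[:k], (0, max(0, k - d.size)))

def similarity(feat_a, feat_b):
    """Simple inverse-distance similarity between two features (assumed form)."""
    return 1.0 / (1.0 + np.linalg.norm(feat_a - feat_b))
```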

To guarantee the continuity of the target objects captured along the target road section, the combined coverage of all sensors must cover the whole section; that is, the coverage areas of adjacent sensors must overlap. Each sensor's coverage area is its field of view in the bird's-eye view of the road section; for example, FIG. 4 shows the field of view of camera D in the bird's-eye view of the target road section. Accordingly, the fields of view of any two adjacent sensors overlap in the bird's-eye view, and this overlapping region serves as the common-view area of the two sensors.
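
Once the two fields of view are expressed as polygons in bird's-eye-view coordinates, the common-view area is simply their geometric overlap. A sketch using the shapely library, with made-up corner coordinates:

```python
from shapely.geometry import Polygon

def common_view_area(fov_a_corners, fov_b_corners):
    """Overlap of two adjacent sensors' BEV field-of-view polygons.

    Each argument is a list of (x, y) corners in bird's-eye-view
    coordinates, e.g. derived from the sensors' calibration.
    """
    overlap = Polygon(fov_a_corners).intersection(Polygon(fov_b_corners))
    return overlap if not overlap.is_empty else None

# Hypothetical FOVs of camera A and camera B along the road (units: meters).
fov_a = [(0, 0), (0, 12), (80, 12), (80, 0)]
fov_b = [(60, 0), (60, 12), (140, 12), (140, 0)]
print(common_view_area(fov_a, fov_b).bounds)   # -> (60.0, 0.0, 80.0, 12.0)
```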

A target located within the common-view area can be captured simultaneously by the two adjacent sensors. Because the sensors are at different positions, they observe the same target from different viewpoints, so the observations of the same target captured by the two adjacent sensors at the same instant are not identical. The targets within the common-view area of the bird's-eye view therefore need to be matched, so that the same target captured by different sensors at the same instant is associated with a single fused target. A fused target is the unified representation of the same target as captured by the different sensors of the current road section. Concretely, each fused target can be assigned a fused-target identifier and associated with the corresponding target of each sensor, so that the same target seen from different viewpoints on the current road section corresponds to the same fused-target identifier, keeping each target's trajectory along the current road section complete, continuous, and unique.

Step S220: based on the target matching result, update the sensor identifier and target identifier corresponding to each fused target in the common-view area, and update the position information of each fused target in the bird's-eye view of the target road section. A target identifier is the identifier of a target in the frame-synchronized output information of each sensor.

For ease of distinction, a target identifier must be set for each target in the image frames captured by each sensor. The same target carries different target identifiers in the image frames of different sensors, so the same target must be associated across adjacent sensors. Updating the identifiers and positions based on the target matching result may therefore proceed as follows: from the first target corresponding to the fused target in the image frame of the sensor of the earlier viewpoint and the target matching result, determine the second target matching the first target in the image frame of the sensor of the later viewpoint; this second target then also corresponds to the fused target. The sensor identifier of the fused target can thus be updated to that of the later sensor, its target identifier to that of the second target, and its position in the bird's-eye view to the position of the second target. By associating the observations of the same target from different viewpoints, the fused target is handed over between viewpoints, keeping the target's trajectory continuous and unique over its entire passage through the road section.

Step S230: based on the frame-synchronized output information of each sensor, the updated sensor identifier and updated target identifier corresponding to each fused target in the common-view area, and the updated position information of each fused target in the bird's-eye view of the target road section, obtain a continuous display result for each fused target along the target road section.

This step may proceed as follows: from the frame-synchronized output information of each sensor, determine the sensor identifiers and target identifiers corresponding to the fused targets outside the common-view area; then combine these identifiers and the positions of the fused targets outside the common-view area with the updated identifiers and updated positions of the fused targets inside the common-view area, to obtain the continuous display result of each fused target along the target road section.

Through steps S210 to S230, targets within the common-view area of the bird's-eye view are matched based on the frame-synchronized output information of each sensor to obtain a target matching result, and based on this result the sensor identifier, target identifier, and bird's-eye-view position of each fused target in the common-view area are updated. Matching the targets within the common-view area establishes the association between the different viewpoints of two successive sensors, so that the same fused target seen in different fields of view is associated and fused. From the frame-synchronized output information, the updated identifiers, and the updated positions, the same target is then displayed continuously in one bird's-eye view, achieving continuous tracking and solving the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, and license plate recognition failures.

In one embodiment, before step S210, the method includes the following steps:

Step S202: perform frame synchronization on the output information of all sensors located on the target road section to obtain the frame-synchronized output information of each sensor. The output information of a sensor includes each target within its field of view and the attribute information of each target.

In this step, frame synchronization of the output information of all sensors on the target road section may use one or more mechanisms such as PTP (Precision Time Protocol) or NTP (Network Time Protocol). The specific frame-synchronization mechanism is not limited in this embodiment, provided that the synchronized output information of all sensors corresponds to image frames captured at the same instant. Each target may include its target identifier, and the attributes of each target may include its coordinates in the bird's-eye view of the road section, vehicle color, license plate, vehicle type, the two-dimensional detection box, and the three-dimensional detection box. The attributes may be determined by estimating the vehicle's three-dimensional shape with stereo vision techniques or deep learning methods and deriving the attributes from that shape; with stereo vision, multiple cameras photograph the vehicle from different angles, and the vehicle's three-dimensional information is recovered from these images.
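
Whatever protocol disciplines the sensor clocks, the per-frame alignment can be sketched as selecting, for each sensor, the frame whose timestamp is nearest a common reference tick; the tolerance value below is an assumption:

```python
def synchronize(frames_per_sensor, tick, tolerance=0.05):
    """Pick, for every sensor, the frame whose timestamp is nearest `tick`.

    frames_per_sensor: {view_id: [(timestamp_s, frame), ...]}.
    Sensors with no frame within `tolerance` seconds are skipped, so the
    returned {view_id: frame} holds frames captured at (nearly) the same instant.
    """
    synced = {}
    for view_id, frames in frames_per_sensor.items():
        if not frames:
            continue
        ts, frame = min(frames, key=lambda tf: abs(tf[0] - tick))
        if abs(ts - tick) <= tolerance:
            synced[view_id] = frame
    return synced
```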

Step S204: based on the frame-synchronized output information of each sensor and the field of view of each sensor in the bird's-eye view of the target road section, determine all fused targets within the target road section to obtain a fused target set.

Fused-target identifiers can be assigned to the targets within each sensor's bird's-eye-view field of view so that the fused target corresponding to the same physical target is unique. Determining all fused targets may proceed as follows: based on the frame-synchronized output information, take all targets within the bird's-eye-view field of view of the first sensor as fused targets, and then take, as additional fused targets, the targets that first appear to any other sensor within its bird's-eye-view field of view outside the common-view area.

Step S206: based on the fused target set, determine the mapping relationship among each fused target, sensor identifier, and target identifier, as well as the position information of each fused target in the bird's-eye view of the target road section.

The mapping relationship among a fused target, a sensor identifier, and a target identifier can be illustrated as follows: with sensor identifier view_id, target identifier obj_id, and fused-target identifier fusion_obj_id, the mapping can be expressed in the form {fusion_obj_id, {view_id, obj_id}}.
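
In code, this mapping is naturally a dictionary keyed by the fused-target identifier; a reverse index then recovers the fused target from a per-sensor detection (both structures, and the sample values, are illustrative):

```python
# Forward map, i.e. {fusion_obj_id, {view_id, obj_id}} from the example above.
fusion_to_view = {7: ("camera_B", 42)}   # fused target 7 <- target 42 of camera B

# Reverse index: (view_id, obj_id) -> fusion_obj_id, used to find the fused
# target when a per-sensor detection is matched in the common-view area.
view_to_fusion = {vo: f for f, vo in fusion_to_view.items()}

assert view_to_fusion[("camera_B", 42)] == 7
```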

The position information of each fused target in the bird's-eye view may include the lane line the fused target is on and its positional relationship to adjacent fused targets. For example, FIG. 5 shows the position information of a fused target in the bird's-eye view of the target road section. As shown in FIG. 5, fused target 15 moves from camera A toward camera B; its position information may comprise the lane line it is on, the identifiers of the nearest fused targets ahead, behind, to the left, and to the right of it (here 3, 35, 10, and 25 respectively), and the distances between fused target 15 and those fused targets.
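
Following the example of FIG. 5, the positional record of fused target 15 might look as follows (the lane identifier and distances are made-up values):

```python
# Positional context of fused target 15 in the bird's-eye view (illustrative).
position_info = {
    "fusion_obj_id": 15,
    "lane_id": 2,                       # lane line the target is on
    "neighbors": {                      # nearest fused target per direction
        "front": {"fusion_obj_id": 3,  "distance_m": 18.5},
        "back":  {"fusion_obj_id": 35, "distance_m": 9.2},
        "left":  {"fusion_obj_id": 10, "distance_m": 3.4},
        "right": {"fusion_obj_id": 25, "distance_m": 3.1},
    },
}
```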

Through the above steps S202 to S204, the output information of all sensors located on the target road section is frame-synchronized to obtain each sensor's frame-synchronized output information; all fusion targets in the target road section are then determined, along with the mapping relationship among each fusion target, the sensor identifier, and the target identifier, and the position information of each fusion target in the bird's-eye view of the target road section. This makes it possible, using that mapping relationship together with the target matching results of adjacent sensors in the common view area, to update the sensor identifier and target identifier corresponding to a fusion target, thereby associating the same target across cameras with different viewing angles and switching between them. Obtaining the position information of each fusion target in the bird's-eye view also allows the position of a lost fusion target to be estimated later from that information, so that even when a fusion target cannot be recognized, it can still be tracked continuously based on the estimated position.

In addition, in one embodiment, the following step is included before step S204:

Step S203: based on the position information and calibration information of each sensor located on the target road section, determine each sensor's field of view in the bird's-eye view of the target road section.

The calibration information of a sensor may be its intrinsic and extrinsic parameters, for example, the intrinsic and extrinsic parameters of a camera.

Furthermore, in one embodiment, the following step is included before step S203:

Based on the sensor type and the road environment of the target road section, determine the position information of each sensor located on the target road section.

When deploying the sensors, in addition to the sensor type, the road environment of the target road section must also be considered. Specifically, an environmental survey and analysis of the target road section may be carried out, and the position of each sensor is determined such that the fields of view of the sensors together cover the entire target road section. It should be noted that determining the sensor positions also requires considering the visibility at each location along the target road section and the mounting angle of each sensor; the sensors should be deployed so that sensor blind zones are minimized while ensuring that the fields of view of adjacent sensors share a common view area.
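
As a rough illustration of these deployment constraints, the following sketch checks, along a one-dimensional longitudinal road axis, that the sensors jointly cover the section and that each adjacent pair overlaps; the interval representation and the assumption of at least one sensor are illustrative, not from the embodiment:

```python
# A sketch of the deployment constraints, reduced to one axis along the road.
def deployment_is_valid(coverages: list[tuple[float, float]],
                        section: tuple[float, float]) -> bool:
    """coverages: per-sensor (start, end) along the road, sorted by start."""
    start, end = section
    if coverages[0][0] > start or coverages[-1][1] < end:
        return False  # the target road section is not fully covered
    # adjacent sensors must overlap so that a common view area exists
    return all(coverages[i][1] > coverages[i + 1][0]
               for i in range(len(coverages) - 1))
```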

In one embodiment, the above step S204 of determining all fusion targets in the target road section based on each sensor's frame-synchronized output information and each sensor's field of view in the bird's-eye view of the target road section, to obtain the fusion target set, includes the following steps:

Step S2042: based on each sensor's frame-synchronized output information, take each target within the field of view of the first sensor of the target road section in the bird's-eye view of the target road section as a first fusion target; the first sensor of the target road section is the first sensor a target passes when entering the target road section.

To ensure that the tracking of a given target on the target road section is continuous and unique, a fusion target is used to represent the display state of that target in the fields of view of different sensors. For example, when a vehicle enters the first sensor's field of view in the bird's-eye view of the target road section, it is marked with the fusion target identifier fusion_obj_001. In the output information of the first sensor view_001, the vehicle's target identifier is obj_001; in the output information of the second sensor view_002, its target identifier is obj_002. For this vehicle, the mapping relationships among the fusion target, the sensor identifiers, and the target identifiers can be expressed as {fusion_obj_001, {view_001, obj_001}} and {fusion_obj_001, {view_002, obj_002}}; these mappings represent the display state of the same target in the fields of view of different sensors.

Step S2044: take each target that first appears, within the field of view of any sensor of the target road section other than the first sensor, in the part of the bird's-eye view of the target road section outside the common view area, as a second fusion target.

Within the target road section there may be new targets entering from an intermediate section; such targets have not yet been marked for target fusion and need to be taken as new fusion targets. A new target entering from an intermediate section is one that first appears, within the field of view of a sensor other than the first sensor, in the part of the bird's-eye view of the target road section outside the common view area.

Step S2046: generate the fusion target set based on the first fusion targets and the second fusion targets.

Through the above steps S2042 to S2046, all fusion targets in the target road section are determined, making it convenient to subsequently use the fusion targets to represent the display state of the same target in the fields of view of different sensors.

In one embodiment, the above step S210 of matching the targets in the common view area in the bird's-eye view of the target road section based on each sensor's frame-synchronized output information, to obtain the target matching result, includes the following steps:

Step S212: based on each sensor's frame-synchronized output information, compute, from the bird's-eye-view perspective, the intersection over union (IoU) between the targets of adjacent sensors in the common view area of the bird's-eye view of the target road section, to obtain a first IoU calculation result; based on the first IoU calculation result, match the targets of the adjacent sensors of the target road section in the common view area to obtain the target matching result.

Matching the targets of the adjacent sensors of the target road section in the common view area based on the first IoU calculation result may be done by comparing the first IoU calculation result with a preset IoU threshold. Specifically, when the first IoU calculation result is greater than the preset IoU threshold, the targets of the adjacent sensors in the common view area are judged to match; when it is less than or equal to the preset IoU threshold, they are judged not to match. The preset IoU threshold can be set according to the specific situation, as long as it suffices to decide whether the target matching of the adjacent sensors in the common view area succeeds. The target matching result may comprise the matching relationship, the number of matches, and the matching times between targets captured from different viewing angles by the adjacent sensors in the common view area, where the matching relationship includes match and no-match. For example, if vehicle S has target identifier obj_003 in the common view area captured by sensor A and target identifier obj_004 in the common view area captured by sensor B, the resulting target matching result is that obj_003 in sensor A matches obj_004 in sensor B. The target matching result can be represented, for example, as {src_view_id, {dst_view_id, {src_obj_id, {dst_obj_id, count}}}}, where src_view_id is the source sensor identifier, dst_view_id is the sensor identifier used for matching, src_obj_id is the target identifier under the source sensor, dst_obj_id is the target identifier under the matching sensor, and count is the number of times the target denoted by src_obj_id has matched the target denoted by dst_obj_id.
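
A minimal sketch of this IoU test and of the nested match-count structure is given below; the axis-aligned box format, the threshold value, and the helper names are assumptions:

```python
from collections import defaultdict

IOU_THRESHOLD = 0.5  # assumed value; set per deployment

def iou(a: tuple[float, float, float, float],
        b: tuple[float, float, float, float]) -> float:
    """IoU of two axis-aligned BEV boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# src_view_id -> dst_view_id -> src_obj_id -> dst_obj_id -> count
match_counts = defaultdict(
    lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(int))))

def record_match(src_view, dst_view, src_obj, src_box, dst_obj, dst_box):
    """Accumulate a match when the BEV IoU of the two boxes clears the threshold."""
    if iou(src_box, dst_box) > IOU_THRESHOLD:
        match_counts[src_view][dst_view][src_obj][dst_obj] += 1
```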

Step S214: based on each sensor's frame-synchronized output information, determine, from the bird's-eye-view perspective, a first point-set feature between each target in the common view area determined by the adjacent sensors and each target in the field of view, in the bird's-eye view of the target road section, of the first sensor of the adjacent sensors; determine a second point-set feature between each target in that common view area and each target in the field of view, in the bird's-eye view of the target road section, of the second sensor of the adjacent sensors; based on the first point-set feature and the second point-set feature, perform similarity matching on the targets in the common view area in the bird's-eye view of the target road section to obtain a similarity matching result; and obtain the target matching result based on the similarity matching result.

Obtaining the target matching result based on the similarity matching result may be done by comparing the similarity matching result with a preset similarity threshold. Specifically, when the similarity matching result is greater than the preset similarity threshold, the targets of the adjacent sensors of the target road section in the common view area are judged to match; when it is less than or equal to the preset similarity threshold, they are judged not to match. The similarity threshold can be set according to the specific situation, as long as it suffices to decide whether the target matching of the adjacent sensors in the common view area succeeds.
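
The embodiment does not fix a particular similarity measure for the point-set features. As one hedged example, cosine similarity over flattened feature vectors could be thresholded as follows; the feature representation and the threshold value are assumptions:

```python
import math

SIMILARITY_THRESHOLD = 0.8  # assumed value

def cosine_similarity(f1: list[float], f2: list[float]) -> float:
    """Cosine similarity between two flat feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 > 0 and n2 > 0 else 0.0

def targets_match(first_point_set_feature: list[float],
                  second_point_set_feature: list[float]) -> bool:
    """Decide a match when the similarity clears the preset threshold."""
    return cosine_similarity(first_point_set_feature,
                             second_point_set_feature) > SIMILARITY_THRESHOLD
```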

Step S216: based on each sensor's frame-synchronized output information, project each target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors into the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors, to obtain the projected targets; compute the IoU between each projected target and each target in the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors, to obtain a second IoU calculation result; and, based on the second IoU calculation result, match the targets of the adjacent sensors of the target road section in the common view area to obtain the target matching result.

Projecting each target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors into the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors can be illustrated by an example.

For example, the two-dimensional box of a target in camera B is projected into the image coordinate system of camera A, with the coordinate points of the target in camera B converted to homogeneous coordinates.

The matrix transformation is as follows (written in homogeneous coordinates, up to a scale factor s):

s · [u, v, 1]^T = H_AB · [x, y, 1]^T

where (u, v) are the coordinates of the target point in the image captured by camera A, (x, y) are the coordinates of the target point in the image captured by camera B, and H_AB is the projective transformation (homography) matrix from the viewpoint of camera B to the viewpoint of camera A.

Through the above formula, the target coordinate points of the image captured by camera B are mapped to their positions in the image captured by camera A.
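
A sketch of this projection is given below, using numpy for the matrix algebra; the homography H_AB is assumed to come from the sensors' calibration information, and the box corner values are illustrative only:

```python
import numpy as np

def project_points(H_AB: np.ndarray, pts_b: np.ndarray) -> np.ndarray:
    """Map Nx2 pixel coordinates from camera B's image into camera A's image.

    pts_b: array of (x, y) points in camera B's image.
    Returns the corresponding (u, v) points in camera A's image.
    """
    ones = np.ones((pts_b.shape[0], 1))
    homogeneous = np.hstack([pts_b, ones])   # (x, y, 1)
    mapped = (H_AB @ homogeneous.T).T        # s * (u, v, 1)
    return mapped[:, :2] / mapped[:, 2:3]    # divide out the scale s

# Project the four corners of a target's 2-D box from camera B
# (corner values are illustrative; H_AB would come from calibration):
box_b = np.array([[100.0, 200.0], [180.0, 200.0],
                  [180.0, 260.0], [100.0, 260.0]])
```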

In this step, the target matching result is obtained by combining matching from the three-dimensional bird's-eye-view perspective with matching of targets from the two-dimensional perspective, which makes the target matching more accurate.

Through the above steps S212 to S216, the targets in the common view area in the bird's-eye view of the target road section are matched using either a matching method from the three-dimensional perspective or a matching method combining the two-dimensional and three-dimensional perspectives, to obtain the target matching result. Based on this result, the same target captured from different viewing angles can be associated, the viewing angle of the fusion target can be switched, and the target's trajectory is kept continuous and unique throughout its passage through the target road section.

In addition, in one embodiment, the above step S220 of updating, based on the target matching result, the sensor identifier and target identifier corresponding to each fusion target in the common view area and the position information of each fusion target in the common view area in the bird's-eye view of the target road section, includes the following step:

Step S222: when a target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors exists in the fusion target list, update, based on the target matching result, the sensor identifier and target identifier corresponding to each fusion target in the common view area, and update the position information of each fusion target in the common view area in the bird's-eye view.

After target matching is completed, the same target captured by adjacent sensors from different viewing angles must be associated; based on the association result, the sensor identifier and target identifier corresponding to each fusion target in the common view area are updated, along with the position information of each fusion target in the common view area in the bird's-eye view. This switches the fusion target from the viewing angle of the first sensor to that of the second sensor, keeping the target's trajectory continuous and unique throughout its passage through the target road section. The first sensor is the one, of the two adjacent sensors, whose field of view the target enters first; the second sensor is the one whose field of view the target enters after leaving the field of view of the first sensor.

Further, in one embodiment, the above step S222 of updating, when a target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors exists in the fusion target list, the sensor identifier and target identifier corresponding to each fusion target in the common view area based on the target matching result, and updating the position information of each fusion target in the common view area in the bird's-eye view, includes the following steps:

Step S2222: when a first target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors exists in the fusion target list, determine, based on the target matching result, the second target in the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors that matches the first target.

Determining the matching second target based on the target matching result may proceed as follows: from the target matching result, determine, for each target in the field of view of the second sensor in the bird's-eye view, how many times it has matched the first target in the field of view of the first sensor; then find the target with the highest match count and that count. When the highest match count is greater than or equal to a preset match-count threshold, the corresponding target in the second sensor's field of view in the bird's-eye view is taken as the second target matching the first target. When the highest match count is less than the preset match-count threshold, the position information of the fusion target in the common view area in the bird's-eye view is updated according to the position information in the first sensor's field of view in the bird's-eye view. The preset match-count threshold can be set according to the specific situation, as long as it suffices to judge whether two targets match.
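
A sketch of this decision rule follows, reusing the nested match-count structure sketched earlier; the threshold value and function name are assumptions:

```python
MATCH_COUNT_THRESHOLD = 3  # assumed value

def find_second_target(match_counts, src_view, dst_view, first_obj):
    """Return the matching dst obj_id, or None if no candidate is confident enough.

    match_counts: src_view -> dst_view -> src_obj -> dst_obj -> count,
    as accumulated during common-view-area matching.
    """
    candidates = match_counts[src_view][dst_view].get(first_obj, {})
    if not candidates:
        return None
    best_obj, best_count = max(candidates.items(), key=lambda kv: kv[1])
    return best_obj if best_count >= MATCH_COUNT_THRESHOLD else None
```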

Step S2224: update the sensor identifier corresponding to the fusion target in the common view area to the identifier of the second sensor of the adjacent sensors; update the target identifier corresponding to the fusion target in the common view area to the identifier of the second target in the second sensor's field of view in the bird's-eye view; and update the position information of the fusion target in the common view area in the bird's-eye view to the position information of the second target in the second sensor's field of view in the bird's-eye view.

Through the above steps S2222 to S2224, when a target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors exists in the fusion target list, the sensor identifier and target identifier corresponding to each fusion target in the common view area are updated based on the target matching result, along with the position information of each fusion target in the common view area in the bird's-eye view. This switches the fusion target from the viewing angle of the first sensor to that of the second sensor; by performing this switch for each pair of adjacent sensors, the target's trajectory is kept continuous and unique throughout its passage through the target road section.

In one embodiment, the following steps are further included after step S220:

Step S2201: when a fusion target is lost, traverse the frame-synchronized output information of the third sensor corresponding to the field of view where the lost fusion target was located, and judge whether that output information contains a third target whose attribute information is consistent with that of the lost fusion target.

While a target is moving, a fusion target may be lost because of occlusion, recognition errors, changes to the target's identifier, and so on. Therefore, when a fusion target is lost, the frame-synchronized output information of the third sensor corresponding to the field of view where the lost fusion target was located is traversed to judge whether it contains a third target whose attribute information is consistent with that of the lost fusion target. Specifically, this may be done using the lost fusion target's attribute information from the previous frame: the third sensor's frame-synchronized output information is traversed to judge whether it contains a third target whose attribute information is consistent with the lost fusion target's attribute information in the previous frame.
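
Which attributes are compared is not fixed by the embodiment; as an assumed example, a consistency check over a few vehicle attributes might look like this:

```python
def attributes_consistent(lost_attrs: dict, candidate_attrs: dict,
                          keys=("plate", "vehicle_class", "color")) -> bool:
    """Judge attribute consistency; the attribute keys here are assumptions."""
    return all(lost_attrs.get(k) is not None and
               lost_attrs.get(k) == candidate_attrs.get(k) for k in keys)
```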

Step S2202: when the third sensor's frame-synchronized output information contains a third target whose attribute information is consistent with that of the lost fusion target, update the sensor identifier corresponding to the fusion target to the identifier of the third sensor, update the target identifier corresponding to the fusion target to the identifier of the third target in the third sensor's field of view in the bird's-eye view, and update the position information of the fusion target in the bird's-eye view to the position information of the third target in the third sensor's field of view in the bird's-eye view.

Step S2203: when the third sensor's frame-synchronized output information does not contain a third target whose attribute information is consistent with that of the lost fusion target, traverse the frame-synchronized output information of the fourth sensor, and judge whether it contains a fourth target whose attributes are consistent with those of the lost fusion target; the fourth sensor is the sensor corresponding to the next field of view the target enters as it moves away from the third sensor's field of view.

Step S2204: when the fourth sensor's frame-synchronized output information contains a fourth target whose attributes are consistent with those of the lost fusion target, update the sensor identifier corresponding to the fusion target to the identifier of the fourth sensor, update the target identifier corresponding to the fusion target to the identifier of the fourth target in the fourth sensor's field of view in the bird's-eye view, and update the position information of the fusion target in the bird's-eye view to the position information of the fourth target in the fourth sensor's field of view in the bird's-eye view.

Step S2205: when the fourth sensor's frame-synchronized output information does not contain a fourth target whose attribute information is consistent with that of the lost fusion target, determine, according to the positional relationships between the fusion target (before it was lost) and multiple adjacent fusion targets, the targets in the fourth sensor's frame-synchronized output information that correspond to those adjacent fusion targets; based on those corresponding targets and on the positional relationships between the fusion target (before it was lost) and the multiple adjacent fusion targets, estimate first position information for the lost fusion target within the fourth sensor's field of view in the bird's-eye view; and update the position information of the fusion target in the bird's-eye view to that first position information.
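
One hedged way to realize this estimate is to store, before the loss, an offset vector from each neighbouring fusion target to the fusion target, and to average the positions implied by the neighbours found in the fourth sensor's output; all names below, and the use of a simple average, are assumptions:

```python
import numpy as np

def estimate_lost_position(neighbor_positions: dict,
                           stored_offsets: dict):
    """Estimate the lost fusion target's BEV position.

    neighbor_positions: fusion_obj_id -> current BEV position (np.ndarray)
        taken from the fourth sensor's frame-synchronized output.
    stored_offsets: fusion_obj_id -> (lost target position - neighbour
        position) recorded before the target was lost.
    Returns the estimated position, or None if no usable neighbour remains.
    """
    estimates = [neighbor_positions[nid] + stored_offsets[nid]
                 for nid in neighbor_positions if nid in stored_offsets]
    if not estimates:
        return None
    return np.mean(estimates, axis=0)
```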

Through the above steps S2201 to S2205, when a fusion target is lost, the frame-synchronized output information of the third sensor corresponding to the field of view where the lost fusion target was located is traversed to see whether it contains a third target whose attribute information is consistent with that of the lost fusion target, and the position information of the lost fusion target is obtained both when such a third target exists and when it does not. By comparing the lost fusion target's attribute information and position information with the current sensor's frame-synchronized output information, or with the frame-synchronized output information of the sensor at the next viewing angle, the sensor identifier, target identifier, and position information corresponding to the lost fusion target are obtained. When a fusion target is lost because of long camera deployment distances, vehicle occlusion, license plate recognition problems, and the like, the corresponding targets in the current view can be found from the graph built from the fusion targets in front of, behind, to the left of, and to the right of the lost fusion target; based on these relative positional relationships, the lane line information of the high-precision map, and the historical trajectory of the fusion target, a model is built to predict the target's position from the bird's-eye-view perspective and to update the fusion target's state. This solves the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, license plate recognition, and similar problems.

In one embodiment, FIG. 6 is a schematic diagram of target tracking results provided by an embodiment of the present application. As shown in FIG. 6, the left area shows a bird's-eye-view schematic on a high-precision map, in which each point represents a trajectory point of a vehicle on the road section, and the box in the upper left corner shows the position and state information of the vehicle currently pointed to by the arrow, such as the vehicle's center point, length, width, and heading angle. On the right are the frame-synchronized images output by the sensor with identifier 01 and the sensor with identifier 02; the vehicles in the two boxes are the same vehicle seen from different viewing angles, and the timestamps in the upper right corners show that the two images were captured at the same moment. As the target vehicle drives from the area of the sensor with identifier 01 into the area of the sensor with identifier 02, fusion target 51 switches from the box in the image output by sensor 01 to the box in the image output by sensor 02. At this point the viewing-angle switch of fusion target 51 is complete, and the sensor with identifier 02 takes primary responsibility for monitoring the state of fusion target 51. Targets are matched and handed over camera by camera, in order, ensuring that each target's trajectory is complete and continuous within the road section.

This embodiment is described and illustrated below through a preferred embodiment.

FIG. 7 is a flowchart of a target tracking method provided by a preferred embodiment of the present application. As shown in FIG. 7, the target tracking method includes the following steps:

Step S701: determine, based on the sensor type and the road environment of the target road section, the position information of each sensor located on the target road section;

Step S702: determine, based on the position information and calibration information of each sensor located on the target road section, each sensor's field of view in the bird's-eye view of the target road section;

Step S703: perform frame synchronization on the output information of all sensors located on the target road section to obtain each sensor's frame-synchronized output information;

Step S704: determine, based on each sensor's frame-synchronized output information and each sensor's field of view in the bird's-eye view of the target road section, all fusion targets in the target road section, to obtain a fusion target set;

Step S705: determine, based on the fusion target set, the mapping relationship among each fusion target, the sensor identifier, and the target identifier, as well as the position information of each fusion target in the bird's-eye view of the target road section;

Step S706: match, based on each sensor's frame-synchronized output information, the targets in the common view area in the bird's-eye view of the target road section, to obtain a target matching result;

Step S707: update, based on the target matching result, the sensor identifier and target identifier corresponding to each fusion target in the common view area, and update the position information of each fusion target in the common view area in the bird's-eye view of the target road section;

Step S708: obtain, based on each sensor's frame-synchronized output information, the updated sensor identifiers and updated target identifiers corresponding to the fusion targets in the common view area, and the updated position information of the fusion targets in the common view area in the bird's-eye view of the target road section, a continuous display result for each fusion target on the target road section.

Through the above steps S701 to S708, the position information of each sensor located on the target road section is determined, and each sensor's field of view in the bird's-eye view of the target road section is determined based on that position information and the calibration information. Based on each sensor's frame-synchronized output information, the targets in the common view area in the bird's-eye view of the target road section are then matched to obtain a target matching result; based on that result, the sensor identifier and target identifier corresponding to each fusion target in the common view area are updated, along with the position information of each fusion target in the common view area in the bird's-eye view of the target road section. Matching the targets in the common view area establishes an association between the different viewing angles of two successive sensors, so that the same fusion target in the different fields of view of the sensors is associated and fused. Then, from each sensor's frame-synchronized output information, the updated sensor identifiers and target identifiers corresponding to the fusion targets in the common view area, and the updated position information of the fusion targets in the common view area in the bird's-eye view of the target road section, the same target is displayed continuously in the same bird's-eye view and tracked continuously, which solves the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, license plate recognition, and similar problems.

It should be understood that although the steps in the flowcharts involved in the above embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on their execution, and they may be executed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.

Based on the same inventive concept, this embodiment also provides a target tracking device, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. The terms "module", "unit", "sub-unit", and the like used below may refer to a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and conceivable.

In one embodiment, FIG. 8 is a structural block diagram of a target tracking device provided by an embodiment of the present application. As shown in FIG. 8, the target tracking device includes:

a matching module 82, configured to match, based on each sensor's frame-synchronized output information, the targets in the common view area in the bird's-eye view of the target road section, to obtain a target matching result, the common view area being the overlapping area between the fields of view, in the bird's-eye view of the target road section, of adjacent sensors arranged on the target road section;

an updating module 84, configured to update, based on the target matching result, the sensor identifier and target identifier corresponding to each fusion target in the common view area, and to update the position information of each fusion target in the common view area in the bird's-eye view of the target road section;

and a display module 86, configured to obtain, based on each sensor's frame-synchronized output information, the updated sensor identifiers and updated target identifiers corresponding to the fusion targets in the common view area, and the updated position information of the fusion targets in the common view area in the bird's-eye view of the target road section, a continuous display result for each fusion target on the target road section.

The above target tracking device matches the targets in the common view area in the bird's-eye view of the target road section based on each sensor's frame-synchronized output information to obtain a target matching result, and, based on that result, updates the sensor identifier and target identifier corresponding to each fusion target in the common view area, along with the position information of each fusion target in the common view area in the bird's-eye view of the target road section. By matching the targets in the common view area, the device establishes an association between the different viewing angles of two successive sensors, so that the same fusion target in the different fields of view of the sensors is associated and fused. Then, from each sensor's frame-synchronized output information, the updated sensor identifiers and target identifiers corresponding to the fusion targets in the common view area, and the updated position information of the fusion targets in the common view area in the bird's-eye view of the target road section, the device displays the same target continuously in the same bird's-eye view and tracks it continuously, which solves the difficulty of continuous tracking in existing vehicle tracking methods caused by long camera deployment distances, vehicle occlusion, license plate recognition, and similar problems.

It should be noted that each of the above modules may be a functional module or a program module, and may be implemented in software or in hardware. For modules implemented in hardware, the above modules may be located in the same processor, or may be distributed among different processors in any combination.

In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when the processor executes the computer program, any one of the target tracking methods in the above embodiments is implemented.

A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the above methods. Any reference to memory, database, or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided in this application may include at least one of relational and non-relational databases; non-relational databases may include, without limitation, blockchain-based distributed databases. The processors involved in the embodiments provided in this application may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing.

The technical features of the above embodiments may be combined arbitrarily. For conciseness, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.

The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that a person of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (9)

1. A method of target tracking, the method comprising:
Performing frame synchronization on the output information of all the sensors positioned on the target road section to obtain output information of each sensor after frame synchronization; the output information of a sensor comprising all targets in the field of view captured by the sensor and attribute information of those targets; determining all fusion targets in the target road section based on the output information of each sensor after frame synchronization and the field of view of each sensor in the bird's-eye view of the target road section to obtain a fusion target set; determining, based on the fusion target set, mapping relationships among the fusion targets, sensor identifiers, and target identifiers, and position information of the fusion targets in the bird's-eye view of the target road section; a fusion target being a unified representation of the same target captured by different sensors on the target road section; matching, based on the output information of each sensor after frame synchronization, targets in a common view area in the bird's-eye view of the target road section to obtain a target matching result; the common view area being an overlapping area between the fields of view, in the bird's-eye view of the target road section, of adjacent sensors arranged on the target road section;
Based on the target matching result, updating the sensor identifier and the target identifier corresponding to each fusion target in the common view area, and updating the position information of each fusion target in the common view area in the bird's-eye view of the target road section; the target identifier being the identifier of each target in the output information of each sensor after frame synchronization;
Obtaining a continuous display result of each fusion target on the target road section based on the output information of each sensor after frame synchronization, the updated sensor identifiers and updated target identifiers corresponding to the fusion targets in the common view area, and the updated position information of the fusion targets in the common view area in the bird's-eye view of the target road section.
2. The target tracking method according to claim 1, wherein before determining all the fusion targets in the target section based on the output information of each sensor after frame synchronization and the field of view of each sensor in the bird's eye view of the target section, obtaining a fusion target set, comprising:
And determining the visual field area of each sensor in the aerial view of the target road section based on the position information and the calibration information of each sensor positioned on the target road section.
3. The target tracking method according to claim 2, characterized by, before determining a field of view of each sensor in a bird's eye view of the target section based on position information and calibration information of each sensor located on the target section, comprising:
position information of each of the sensors located at the target link is determined based on the sensor type and the link environment of the target link.
4. The method according to claim 1, wherein the determining all the fusion targets in the target section based on the output information of each sensor after frame synchronization and the field of view of each sensor in the aerial view of the target section to obtain the fusion target set includes:
Based on the output information of the sensors after frame synchronization, taking each target of the first sensor of the target road section in a visual field area in the aerial view of the target road section as a first fusion target; the first sensor of the target road section is the first sensor of the target entering the target road section;
taking, as second fusion targets, targets that first appear in the part, outside the common view area, of the field of view in the bird's-eye view of the target road section of each sensor of the target road section other than the first sensor;
and generating the fusion target set based on the first fusion target and the second fusion target.
5. The method for tracking a target according to claim 1, wherein the matching the target in the common area in the bird's eye view of the target section based on the output information after the frame synchronization of each sensor to obtain a target matching result includes:
Based on the output information of each sensor after frame synchronization, performing, from the bird's-eye-view perspective, intersection-over-union (IoU) calculation on the targets of the adjacent sensors in the common view area in the bird's-eye view of the target road section to obtain a first IoU calculation result; and matching, based on the first IoU calculation result, the targets of the adjacent sensors of the target road section in the common view area to obtain the target matching result;
Or, based on the output information of the sensors after frame synchronization, determining a first point set feature between each target in the common area determined by the adjacent sensor and each target in the visual field area of the first sensor in the adjacent sensor in the aerial view of the target road section under the aerial view angle; determining a second point set characteristic between each target in the common area determined by the adjacent sensor and each target in the visual field area of the second sensor in the adjacent sensor in the aerial view of the target road section; performing similarity matching on targets in a common area in the aerial view of the target road section based on the first point set features and the second point set features to obtain a similarity matching result; obtaining the target matching result based on the similarity matching result;
Or, based on the output information of each sensor after frame synchronization, projecting each target in the field of view, in the bird's-eye view, of the first sensor of the adjacent sensors into the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors, so as to obtain projected targets; performing intersection-over-union calculation on each projected target and each target in the field of view, in the bird's-eye view, of the second sensor of the adjacent sensors to obtain a second IoU calculation result; and matching, based on the second IoU calculation result, the targets of the adjacent sensors of the target road section in the common view area to obtain the target matching result.
6. The target tracking method according to claim 5, wherein the updating the sensor identifier and the target identifier corresponding to each fusion target in the common-view area based on the target matching result, and updating the position information of each fusion target in the common-view area in the bird's-eye view of the target road section, comprises:
When a target in the field-of-view area, in the bird's-eye view, of the first sensor of the adjacent sensors exists in a fused target list, updating, based on the target matching result, the sensor identifier and the target identifier corresponding to each fusion target in the common-view area, and updating the position information of each fusion target in the common-view area in the bird's-eye view.
7. The target tracking method according to claim 6, wherein the updating, based on the target matching result, the sensor identifier and the target identifier corresponding to each fusion target in the common-view area when a target in the field-of-view area of the first sensor of the adjacent sensors exists in the fused target list comprises:
When a first target in the field-of-view area, in the bird's-eye view, of the first sensor of the adjacent sensors exists in the fused target list, determining, based on the target matching result, a second target in the field-of-view area, in the bird's-eye view, of the second sensor of the adjacent sensors that matches the first target; and
Updating the sensor identifier corresponding to the fusion target in the common-view area to the identifier of the second sensor of the adjacent sensors, updating the target identifier corresponding to the fusion target in the common-view area to the identifier of the second target in the field-of-view area, in the bird's-eye view, of the second sensor of the adjacent sensors, and updating the position information of the fusion target in the common-view area in the bird's-eye view to the position information of the second target in the field-of-view area, in the bird's-eye view, of the second sensor of the adjacent sensors.
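By way of a non-limiting illustration (not part of the claims), the sketch below shows the identifier and position hand-over of claims 6 and 7 once a matching result is available; the fused-target-list layout and the positions_b mapping are assumptions carried over from the earlier sketches.

    # Illustrative sketch only. `pairs` is the (first_id, second_id)
    # matching result from the common-view area; positions_b maps the
    # second sensor's target ids to bird's-eye-view positions.
    def hand_over(fused_targets, pairs, first_sensor, second_sensor,
                  positions_b):
        """Re-bind matched fusion targets from the first to the second sensor."""
        for first_id, second_id in pairs:
            for fusion in fused_targets:
                if (fusion["sensor"] == first_sensor
                        and fusion["target"] == first_id):
                    # the fusion target now follows the second sensor's
                    # detection of the same physical object
                    fusion["sensor"] = second_sensor
                    fusion["target"] = second_id
                    fusion["pos"] = positions_b[second_id]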
8. The target tracking method according to any one of claims 1 to 7, further comprising, after the updating the sensor identifier and the target identifier corresponding to each fusion target in the common-view area based on the target matching result and updating the position information of each fusion target in the common-view area in the bird's-eye view:
When a fusion target is lost, traversing the frame-synchronized output information of a third sensor corresponding to the field-of-view area in which the lost fusion target was located, and judging whether a third target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the third sensor;
When the third target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the third sensor, updating the sensor identifier corresponding to the fusion target to the identifier of the third sensor, updating the target identifier corresponding to the fusion target to the identifier of the third target in the field-of-view area, in the bird's-eye view, of the third sensor, and updating the position information of the fusion target in the bird's-eye view to the position information of the third target in the field-of-view area, in the bird's-eye view, of the third sensor;
When no third target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the third sensor, traversing the frame-synchronized output information of a fourth sensor, and judging whether a fourth target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the fourth sensor, wherein the fourth sensor is the sensor corresponding to the next field-of-view area that a target enters after leaving the field-of-view area of the third sensor;
When the fourth target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the fourth sensor, updating the sensor identifier corresponding to the fusion target to the identifier of the fourth sensor, updating the target identifier corresponding to the fusion target to the identifier of the fourth target in the field-of-view area, in the bird's-eye view, of the fourth sensor, and updating the position information of the fusion target in the bird's-eye view to the position information of the fourth target in the field-of-view area, in the bird's-eye view, of the fourth sensor;
When no fourth target whose attribute information is consistent with that of the lost fusion target exists in the frame-synchronized output information of the fourth sensor, determining, according to the positional relationship between the fusion target and its adjacent fusion targets before the fusion target was lost, the targets corresponding to those adjacent fusion targets in the frame-synchronized output information of the fourth sensor; estimating first position information of the lost fusion target in the field-of-view area, in the bird's-eye view, of the fourth sensor based on the targets corresponding to the adjacent fusion targets and on the pre-loss positional relationship; and updating the position information of the fusion target in the bird's-eye view to the first position information in the field-of-view area, in the bird's-eye view, of the fourth sensor.
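By way of a non-limiting illustration (not part of the claims), the sketch below compresses the three-stage recovery of claim 8: re-acquire the lost fusion target in the third sensor's output, then in the fourth sensor's output, and otherwise estimate its position from the pre-loss offsets to adjacent fusion targets. Equality of attribute dicts as the consistency test and the offset-averaging estimate are assumptions for illustration.

    # Illustrative sketch only. third_out / fourth_out map target ids to
    # {"attributes": ..., "pos": (x, y)}; neighbor_offsets maps an adjacent
    # fusion target's id in fourth_out to its pre-loss (dx, dy) offset
    # from the lost target.
    def recover_lost(fusion, third_out, fourth_out,
                     third_sensor, fourth_sensor, neighbor_offsets):
        """Re-acquire a lost fusion target or estimate its position."""
        for sensor_id, output in ((third_sensor, third_out),
                                  (fourth_sensor, fourth_out)):
            for tid, t in output.items():
                if t["attributes"] == fusion["attributes"]:
                    # consistent attributes: rebind identifiers and position
                    fusion["sensor"], fusion["target"] = sensor_id, tid
                    fusion["pos"] = t["pos"]
                    return fusion
        # neither sensor re-detects it: average the positions implied by
        # the pre-loss offsets to adjacent fusion targets now seen by the
        # fourth sensor (offset = neighbor_pos - lost_pos at pre-loss time)
        est = [(fourth_out[nid]["pos"][0] - dx,
                fourth_out[nid]["pos"][1] - dy)
               for nid, (dx, dy) in neighbor_offsets.items()
               if nid in fourth_out]
        if est:
            fusion["pos"] = (sum(x for x, _ in est) / len(est),
                             sum(y for _, y in est) / len(est))
        return fusion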
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the target tracking method according to any one of claims 1 to 8.
CN202411109434.5A 2024-08-13 2024-08-13 Target tracking method and computer equipment Active CN118628533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411109434.5A CN118628533B (en) 2024-08-13 2024-08-13 Target tracking method and computer equipment

Publications (2)

Publication Number Publication Date
CN118628533A (en) 2024-09-10
CN118628533B (en) 2024-10-22

Family

ID=92596547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411109434.5A Active CN118628533B (en) 2024-08-13 2024-08-13 Target tracking method and computer equipment

Country Status (1)

Country Link
CN (1) CN118628533B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359334A (en) * 2020-09-30 2022-04-15 北京万集科技股份有限公司 Target tracking method, apparatus, computer equipment and storage medium
CN115457288A (en) * 2022-09-26 2022-12-09 北京易航远智科技有限公司 Multi-target tracking method and device based on aerial view angle, storage medium and equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977362A (en) * 2022-04-20 2023-10-31 深圳市普渡科技有限公司 Target tracking method, device, computer equipment and storage medium
CN115984586A (en) * 2022-12-12 2023-04-18 北京易航远智科技有限公司 Multi-target tracking method and device under aerial view angle
CN116229118A (en) * 2023-05-04 2023-06-06 泉州装备制造研究所 Bird's eye view target detection method based on manifold matching
CN117635657A (en) * 2023-11-28 2024-03-01 浙江海康智联科技有限公司 BEV fusion sensing and multi-target tracking method based on road side end
CN117994748A (en) * 2024-01-15 2024-05-07 智道网联科技(北京)有限公司 Road side aerial view target detection method and device, computing equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant