
CN113763435A - Tracking and Shooting Method Based on Multiple Cameras - Google Patents


Info

Publication number
CN113763435A
CN113763435A
Authority
CN
China
Prior art keywords
target
camera
information
cameras
network topology
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010493499.XA
Other languages
Chinese (zh)
Inventor
吴德佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingbiao Technology Group Co ltd
Original Assignee
Jingbiao Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingbiao Technology Group Co ltd filed Critical Jingbiao Technology Group Co ltd
Priority to CN202010493499.XA priority Critical patent/CN113763435A/en
Publication of CN113763435A publication Critical patent/CN113763435A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/292 Multi-camera tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tracking shooting method based on multiple cameras, which relates to the technical field of camera monitoring and addresses the problem that a suspicious target currently cannot be tracked effectively and in time. The method comprises the following steps: constructing a monitoring network of multiple cameras and determining the information of each camera; determining a global network topology based on the information of each camera; identifying and tracking a target and acquiring the target's motion trajectory under a single camera; predicting the target's motion trajectory based on the global network topology; and fusing the motion trajectories under the single cameras to obtain the target's global motion trajectory.

Description

Tracking shooting method based on multiple cameras
Technical Field
The invention relates to the technical field of camera monitoring, in particular to a tracking shooting method based on multiple cameras.
Background
With the continuous development of monitoring networks, a large number of cameras are installed in more and more places to ensure the safety of the monitored area. Because camera positions are relatively fixed and each camera's field of view is limited, monitoring blind spots are difficult to avoid. Installing multiple cameras can, under certain conditions, enlarge the monitored field of view, but monitoring personnel are still required to follow a moving object through the video monitoring system, and because of the large number of monitors, the volume of information and other factors, the moving object cannot be tracked effectively and in time.
Disclosure of Invention
The invention mainly aims to provide a tracking shooting method based on multiple cameras, so as to solve the problem that a suspicious target currently cannot be tracked effectively and in time.
In order to achieve the above object, the present invention provides the following technical solution. A tracking shooting method based on multiple cameras comprises the following steps:
constructing a monitoring network of multiple cameras, and determining the information of each camera;
determining a global network topology based on the information of each camera;
identifying and tracking a target, and acquiring the target's motion trajectory under a single camera;
predicting the target's motion trajectory based on the global network topology;
and fusing the motion trajectories under the single cameras to obtain the target's global motion trajectory.
With this technical solution, the camera positions are arranged so that the combined monitoring range covers the entire area to be monitored. A global network topology is constructed from the monitoring information acquired by each camera and is used to associate the images the cameras subsequently acquire. Target recognition is performed on the images acquired by each camera; once the target is recognized, tracking starts and the target's motion trajectory in the scene of the corresponding camera is recorded. The target's motion trajectory is then predicted under the spatio-temporal constraints of the global network topology, and the predicted position information is sent to the corresponding camera, so that the camera locks onto the position of the monitored target in advance and the tracking rate is improved. Finally, the target's motion trajectories under the individual cameras are fused to obtain the target's global motion trajectory, so that staff can know the target's route.
In an embodiment of the present application, the information of each camera includes the camera's monitoring area and/or the camera's monitoring angle.
With this technical solution, the positions of the cameras are adjusted according to the information of all cameras, so that the monitored area is covered as a whole and its safety is further ensured.
In an embodiment of the present application, determining the global network topology based on the information of each camera further includes the following steps:
acquiring the color histogram information of the monitoring area corresponding to each camera;
clustering the image information acquired by each camera with the Mean Shift algorithm, based on the color histogram information, to obtain the image segmentation result corresponding to each camera;
determining the global network topology based on the corresponding image segmentation results.
With this technical solution, the network topology is constructed from color histogram information. Compared with constructing the topology from target tracking, this places lower requirements on the cameras and allows the topology to be constructed more accurately when the target is occluded. The regions in which adjacent cameras monitor the same scene are determined; the image information monitored by each camera is clustered with the Mean Shift algorithm and the feature points of the images are extracted; the images are segmented with an optical flow method and spatially transformed according to the segmentation results so that the overlapping regions are calibrated; the images are then stitched on the basis of these overlaps to obtain the global network topology.
In an embodiment of the present application, identifying and tracking the target and acquiring its motion trajectory under a single camera further includes the following steps:
detecting the images acquired by each camera with the OTSU segmentation method to obtain the moving target and the target's initial information;
extracting local features of the target from the initial image of the target acquired by the camera, and constructing an initial model of the target;
updating the initial model with the images acquired by the cameras in real time to obtain a real-time model of the target;
and searching for the target in the images acquired by the cameras on the basis of the target's real-time model.
With this technical solution, once the target is detected its initial model is learned further, that is, the target model is continuously updated to obtain a real-time model, and subsequent searching and tracking are carried out on the basis of this real-time model, which effectively improves the tracking accuracy.
In an embodiment of the present application, predicting the target's motion trajectory based on the global network topology further includes the following step: predicting the target's motion trajectory with a combination of the CamShift algorithm and a Kalman filter.
With this technical solution, the CamShift algorithm is used and a Kalman filter is introduced to predict the target's motion trajectory; the method therefore runs in real time, effectively suppresses other interference, and is more robust.
The invention has the following beneficial effects: the target's initial model is updated with a deep learning algorithm to obtain a real-time target model, which effectively improves the tracking precision of the cameras; the target is recognized by combining local feature recognition with the spatio-temporal constraints of the global network topology, so that the target can still be correctly recognized and tracked when part of it is occluded, giving the method strong applicability.
Drawings
In order to illustrate the embodiments or technical solutions of the present invention more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only exemplary embodiments of the present invention, and that those skilled in the art can derive other drawings from the structures shown in them without inventive effort,
wherein:
Fig. 1 is a flow chart of the multi-camera based tracking shooting method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only exemplary embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The multi-camera based tracking shooting method, as shown in Fig. 1, includes the following steps:
S1, a multi-camera monitoring network is constructed and the information of each camera is determined.
Specifically, the connection between the cameras and the switches is configured so that real-time monitoring is ensured. The installation positions, the number of cameras and the monitoring view angles are planned on the basis of the terrain of the monitored area and the physical parameters of the cameras, ensuring that the combined monitoring range covers the entire area to be monitored. The information of each camera is then acquired, including its monitoring area, its monitoring angle, the adjacency of the monitoring areas, and so on. The images acquired by the cameras may overlap to some degree.
S2, a global network topology is determined based on the information of each camera.
S21, the color histogram information of the monitoring area corresponding to each camera is acquired.
Specifically, the images collected by each camera are decomposed and the corresponding color histograms are generated.
S22, based on the color histogram information, the image information collected by each camera is clustered with the Mean Shift algorithm, to obtain the image segmentation result corresponding to each camera.
Specifically, based on the color information in the color histograms, the image information monitored by each camera is clustered with the Mean Shift algorithm and the feature points of the images are extracted.
S23, the global network topology is determined based on the corresponding image segmentation results.
Specifically, the images are segmented with an optical flow method and the images monitored by the cameras are spatially transformed according to the segmentation results, so that the overlapping image regions are calibrated; the images are then stitched on the basis of these overlaps to obtain the global network topology.
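As an illustration of how the overlap calibration in S23 might be realised, the sketch below estimates a homography between two neighbouring cameras from matched ORB feature points and warps one view onto the other. ORB matching with RANSAC is used here as one common way to obtain the spatial transformation mentioned in the description, not as the patent's prescribed method, and the file names are hypothetical.

```python
# Sketch of S23: calibrate the overlap between two neighbouring cameras with a
# homography estimated from matched feature points, then warp one view onto the
# other. The pairwise homographies are one possible representation of the edges
# of the global network topology.
import cv2
import numpy as np

img_a = cv2.imread("camera_01_frame.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("camera_02_frame.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warping camera A into camera B's frame exposes the overlapping region used
# for stitching the views together.
warped_a = cv2.warpPerspective(img_a, H, (img_b.shape[1], img_b.shape[0]))
```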
S3, the target is identified and tracked, and the target's motion trajectory under a single camera is acquired.
S31, the images collected by each camera are detected with the OTSU segmentation method to identify the moving target and obtain the target's initial information.
Specifically, the images acquired by each camera are preprocessed, that is, processed with a preset normalized vegetation index to obtain the corresponding gray-scale images; the gray-scale images are then processed with the OTSU segmentation method to obtain the corresponding binary images, the target is identified from these binary images, and the target's initial information, including its position and scale, is obtained.
S32, local features of the target are extracted from the initial image of the target collected by the camera, and an initial model of the target is constructed.
Specifically, the images acquired by the camera that first identified the target are examined, the local feature points in each frame are extracted, and the initial model of the target is constructed.
S33, the initial model is updated with the images acquired by the cameras in real time to obtain a real-time model of the target.
Specifically, local feature points are extracted from the image information about the target that each camera subsequently acquires in real time, and are used to match against and update the real-time model of the target.
Preferably, local feature points in the image are selected and an initial Markov model is constructed; the local feature points in the images acquired in real time are then used as training samples and fed into the initial Markov model, which is trained until the optimized model, i.e. the real-time model of the target, is obtained.
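The description leaves the Markov model itself unspecified. Purely as one plausible reading, the sketch below refits a Gaussian hidden Markov model (from the hmmlearn package) on the feature-point observations accumulated from the frames seen so far; the component count, iteration count, helper name update_markov_model and toy data are all assumptions of the sketch.

```python
# Sketch of the preferred embodiment: treat the per-frame local feature points
# as observation sequences and refit a hidden Markov model as new frames arrive.
import numpy as np
from hmmlearn import hmm

def update_markov_model(feature_sequences):
    """feature_sequences: list of (n_points, n_dims) arrays, one per frame."""
    X = np.vstack(feature_sequences)
    lengths = [len(s) for s in feature_sequences]
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)              # retrain with all samples seen so far
    return model

# Toy usage: two frames' worth of 2-D feature point coordinates
rng = np.random.default_rng(0)
model = update_markov_model([rng.random((40, 2)), rng.random((35, 2))])
```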
S34, the target is searched for in the images captured by the cameras on the basis of the target's real-time model.
Specifically, the current real-time model of the target is supplied to each camera and used to search for the target in that camera's images.
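As an illustration of the search step, the sketch below matches the target's real-time descriptor model against the local features of a camera's current frame. The distance threshold, the minimum match count and the helper name find_target are assumptions of the sketch.

```python
# Sketch of S34: search a camera's current frame for the target by matching
# its real-time descriptor model against the frame's local features.
import cv2

def find_target(frame_gray, model_descriptors, min_matches=15):
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(model_descriptors, descriptors)
    good = [m for m in matches if m.distance < 60]
    if len(good) < min_matches:
        return None                     # target not visible in this camera
    # Return the matched keypoint locations as the target's evidence in this view
    return [keypoints[m.trainIdx].pt for m in good]
```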
S4, the target's motion trajectory is predicted based on the global network topology.
Specifically, based on the global network topology, i.e. the topology among the cameras, the set of cameras associated with the one that found the moving target is obtained; by analysing the spatial proximity of the cameras and the time differences with which the moving target appears, the corresponding cameras are locked as key monitoring cameras to track the moving target. The CamShift algorithm is then combined with a Kalman filter: the edges of the moving target are detected and its relationship to the surrounding environment is determined so that positioning can be performed, that is, the CamShift algorithm converts the target's position in the local coordinate system into indoor coordinate system data, and the target's motion trajectory is predicted, i.e. it is predicted in which camera's image the target will appear next.
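For illustration, the sketch below combines OpenCV's CamShift with a Kalman filter, as the description proposes: CamShift measures the target window on a hue back-projection, and the Kalman filter predicts the window centre for the next frame, which could then be forwarded to the relevant camera. The video file name, initial window and noise settings are assumptions of the sketch.

```python
# Sketch of S4: track the target window with CamShift on a hue back-projection
# and smooth/predict its centre with a Kalman filter.
import cv2
import numpy as np

cap = cv2.VideoCapture("camera_01.mp4")
ok, frame = cap.read()
track_window = (120, 80, 60, 140)                      # x, y, w, h from detection
x, y, w, h = track_window
roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

kf = cv2.KalmanFilter(4, 2)                            # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                          # predicted centre for the next position
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    cx, cy = rot_rect[0]                               # measured centre from CamShift
    kf.correct(np.array([[cx], [cy]], np.float32))
```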
S5, the motion trajectories under the single cameras are fused to obtain the target's global motion trajectory.
Specifically, the corresponding images collected by the cameras that recognized the moving target are stitched together, and the target's global motion trajectory is obtained.
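The fusion step can be pictured as mapping each camera's local trajectory into a shared global frame and ordering the points by time; the sketch below does this with the pairwise homographies obtained when the topology was built. The track data, homographies and the helper names to_global and fuse_tracks are illustrative assumptions, not elements defined by the patent.

```python
# Sketch of S5: project each camera's local trajectory into a shared global
# frame via its homography, then merge by timestamp.
import cv2
import numpy as np

def to_global(points_xy, H):
    """Project an (n, 2) list/array of image points through homography H."""
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def fuse_tracks(local_tracks, homographies):
    """local_tracks: {cam_id: [(t, x, y), ...]}; homographies: {cam_id: 3x3 H to global}."""
    fused = []
    for cam_id, track in local_tracks.items():
        times = [t for t, _, _ in track]
        pts = to_global([(x, y) for _, x, y in track], homographies[cam_id])
        fused += [(t, float(px), float(py), cam_id) for t, (px, py) in zip(times, pts)]
    return sorted(fused)                                # global trajectory ordered by time

# Toy usage: camera 2's view is shifted 300 px to the right of the global frame
tracks = {1: [(0, 10, 20), (1, 15, 22)], 2: [(2, 5, 21)]}
Hs = {1: np.eye(3, dtype=np.float32),
      2: np.float32([[1, 0, 300], [0, 1, 0], [0, 0, 1]])}
print(fuse_tracks(tracks, Hs))
```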
The above description covers only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions falling under the idea of the present invention belong to its protection scope. It should be noted that modifications and refinements made by those skilled in the art without departing from the principle of the present invention shall also be considered within the protection scope of the present invention.

Claims (5)

1. A tracking shooting method based on multiple cameras, characterized in that it comprises the following steps:
constructing a monitoring network of multiple cameras, and determining the information of each camera;
determining a global network topology based on the information of the cameras;
identifying and tracking a target, and acquiring the target's motion trajectory under a single camera;
predicting the target's motion trajectory based on the global network topology;
and fusing the motion trajectories under the single cameras to obtain the target's global motion trajectory.

2. The tracking shooting method based on multiple cameras according to claim 1, characterized in that the information of each camera includes: the monitoring area of each camera and/or the monitoring angle of each camera.

3. The tracking shooting method based on multiple cameras according to claim 2, characterized in that determining the global network topology based on the information of the cameras further comprises the following steps:
acquiring the color histogram information of the monitoring area corresponding to each camera;
clustering the image information collected by each camera with the Mean Shift algorithm, based on the color histogram information, to obtain the image segmentation results corresponding to the cameras;
determining the global network topology based on the corresponding image segmentation results.

4. The tracking shooting method based on multiple cameras according to claim 3, characterized in that identifying and tracking the target and acquiring the target's motion trajectory under a single camera further comprises the following steps:
detecting the images collected by each camera with the OTSU segmentation method to obtain the moving target and the target's initial information;
extracting local features of the target from the initial image of the target collected by the camera, and constructing an initial model of the target;
updating the initial model with the images collected by the cameras in real time to obtain a real-time model of the target;
searching for the target in the images collected by the cameras based on the target's real-time model.

5. The tracking shooting method based on multiple cameras according to claim 4, characterized in that predicting the target's motion trajectory based on the global network topology further comprises the following step: predicting the target's motion trajectory with a combination of the CamShift algorithm and a Kalman filter.
CN202010493499.XA 2020-06-02 2020-06-02 Tracking and Shooting Method Based on Multiple Cameras Pending CN113763435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010493499.XA CN113763435A (en) 2020-06-02 2020-06-02 Tracking and Shooting Method Based on Multiple Cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010493499.XA CN113763435A (en) 2020-06-02 2020-06-02 Tracking and Shooting Method Based on Multiple Cameras

Publications (1)

Publication Number Publication Date
CN113763435A true CN113763435A (en) 2021-12-07

Family

ID=78783236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010493499.XA Pending CN113763435A (en) 2020-06-02 2020-06-02 Tracking and Shooting Method Based on Multiple Cameras

Country Status (1)

Country Link
CN (1) CN113763435A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572804A (en) * 2009-03-30 2009-11-04 浙江大学 Multi-camera intelligent control method and device
CN102629385A (en) * 2012-02-28 2012-08-08 中山大学 Object matching and tracking system based on multiple camera information fusion and method thereof
CN103325121A (en) * 2013-06-28 2013-09-25 安科智慧城市技术(中国)有限公司 Method and system for estimating network topological relations of cameras in monitoring scenes
JP2016099941A (en) * 2014-11-26 2016-05-30 日本放送協会 System and program for estimating position of object
KR20180032400A (en) * 2016-09-22 2018-03-30 한국전자통신연구원 multiple object tracking apparatus based Object information of multiple camera and method therefor
CN106709436A (en) * 2016-12-08 2017-05-24 华中师范大学 Cross-camera suspicious pedestrian target tracking system for rail transit panoramic monitoring
CN110175583A (en) * 2019-05-30 2019-08-27 重庆跃途科技有限公司 It is a kind of in the campus universe security monitoring analysis method based on video AI
CN111080679A (en) * 2020-01-02 2020-04-28 东南大学 Method for dynamically tracking and positioning indoor personnel in large-scale place

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118709144A (en) * 2024-08-27 2024-09-27 北京博能科技股份有限公司 A method and device for processing monitoring data from multiple sources
CN118709144B (en) * 2024-08-27 2024-11-19 北京博能科技股份有限公司 A method and device for processing monitoring data from multiple sources

Similar Documents

Publication Publication Date Title
CN110619657B (en) Multi-camera linkage multi-target tracking method and system for intelligent communities
US8611591B2 (en) System and method for visually tracking with occlusions
CN105745687B (en) Context aware Moving target detection
CN113506317A (en) Multi-target tracking method based on Mask R-CNN and apparent feature fusion
TWI382762B (en) Method for tracking moving object
JP5180733B2 (en) Moving object tracking device
US20110142283A1 (en) Apparatus and method for moving object detection
Bloisi et al. Argos—A video surveillance system for boat traffic monitoring in Venice
EP1410333A1 (en) Moving object assessment system and method
CN113988228B (en) Indoor monitoring method and system based on RFID and vision fusion
EP1399889A1 (en) Method for monitoring a moving object and system regarding same
EP1405504A1 (en) Surveillance system and methods regarding same
CN104008371A (en) Regional suspicious target tracking and recognizing method based on multiple cameras
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN112329671B (en) Pedestrian running behavior detection method based on deep learning and related components
Lin et al. Collaborative pedestrian tracking and data fusion with multiple cameras
Papaioannou et al. Tracking people in highly dynamic industrial environments
TW202244847A (en) Target tracking method and apparatus, electronic device and storage medium
CN116883458B (en) Transformer-based multi-target tracking system fusing motion characteristics with observation as center
Snidaro et al. Quality-based fusion of multiple video sensors for video surveillance
JP4578864B2 (en) Automatic tracking device and automatic tracking method
CN119445651A (en) Ground area detection method, device, equipment and medium in video surveillance scene
CN112767476A (en) Rapid positioning system, method and application
Bisio et al. Vehicular/non-vehicular multi-class multi-object tracking in drone-based aerial scenes
CN113763435A (en) Tracking and Shooting Method Based on Multiple Cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211207)