CN115909219A - Scene change detection method and system based on video analysis - Google Patents
Scene change detection method and system based on video analysis
- Publication number: CN115909219A (application CN202211704536.2A)
- Authority: CN (China)
- Legal status: Pending
Abstract
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to a scene change detection method and system based on video analysis.
Background
In recent years, surveillance systems have been widely deployed in transportation, military, airport, bank, shopping-mall, community, and other settings. In many of these fields, an important condition for the normal operation of a surveillance system is obtaining stable, clear, and well-targeted surveillance video content.
The large-scale deployment of surveillance systems has raised new demands on their operation and maintenance. First, 24-hour uninterrupted use increases the risk and probability of equipment failure; second, because surveillance devices are deployed in a scattered fashion and in large numbers, the cost and difficulty of manual inspection have grown substantially. In practice, an external force or human interference may change a surveillance camera's shooting angle and thereby affect the targets in the surveillance video. To identify such scene changes quickly and accurately, research into automatic detection of scene changes in surveillance video using computer vision is therefore essential.
Existing computer-vision approaches to scene change detection generally rely on hand-designed scene description features: the feature difference between adjacent frames is computed and compared against a preset threshold, and a frame is flagged as a scene change when the threshold is exceeded. The scene description features are mainly pixel features, motion features, and feature points. Pixel-based methods, such as pixel-difference and histogram features, are sensitive to illumination changes and moving objects in the scene; in motion-based methods, reliable motion estimation consumes substantial computing resources; and in feature-point-matching methods, the choice of feature points varies greatly across scenes and the computation is complex.
Real application scenes are complex and diverse; hand-designed scene description features have low robustness and are easily affected by illumination changes and slight camera jitter.
Summary of the Invention
In view of the technical problems of current scene change detection techniques, namely the low robustness of scene description features and their susceptibility to illumination changes and slight jitter, the purpose of the present invention is to provide a scene change detection method and system based on video analysis.
In one aspect, an embodiment of the present invention provides a scene change detection method based on video analysis, comprising:
acquiring video image data, the video image data comprising a plurality of video frames;
processing the video image data to obtain a target element corresponding to each of the video frames;
setting an initial value of a scene reference object according to the video image data;
traversing the video frames in their order within the video image data; for any current video frame in the traversal, determining a first scene similarity for the current video frame according to the target element of the current video frame and the scene reference object; when the first scene similarity is below a first threshold, updating the scene reference object, and otherwise keeping it unchanged; the updated or maintained scene reference object is then used to determine the first scene similarity of subsequent video frames.
Further, acquiring the video image data comprises:
obtaining an original video data file;
preprocessing the original video data file to obtain the video image data.
Further, obtaining the original video data file comprises:
shooting a fixed scene with a camera installed in the scene to obtain the original video data file;
or
retrieving historical video data files saved by a camera in the scene to obtain the original video data file.
Further, preprocessing the original video data file comprises:
converting the image size of the original video data file to a standard size;
converting the image color space of the original video data file to a standard color space.
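The two conversion steps above can be sketched without committing to a particular vision library. The 224×224 standard size, the BGR input layout, the nearest-neighbor resize, and grayscale as the "standard color space" are all illustrative assumptions, not choices made by the patent:

```python
import numpy as np

def preprocess_frame(frame, std_size=(224, 224)):
    """Convert one frame to a standard size and standard color space.

    Minimal sketch: nearest-neighbor resizing plus a BGR -> grayscale
    conversion stand in for whatever concrete size and color space a
    deployment actually standardizes on.
    """
    h, w = frame.shape[:2]
    th, tw = std_size
    rows = np.arange(th) * h // th          # nearest source row per output row
    cols = np.arange(tw) * w // tw          # nearest source column per output column
    resized = frame[rows][:, cols]          # nearest-neighbor resize
    weights = np.array([0.114, 0.587, 0.299])        # BGR luma weights
    gray = resized.astype(np.float32) @ weights      # collapse color channels
    return gray.astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out = preprocess_frame(frame)
print(out.shape)  # (224, 224)
```

In a real system a library resize (e.g. with area or bilinear interpolation) would be preferable; the point here is only that every frame ends up with identical shape and channel layout before detection and segmentation.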
Further, setting the initial value of the scene reference object according to the video image data comprises:
setting a specific category of reference object;
selecting the first several video frames from the video image data;
delineating the corresponding target element in each of the selected video frames, each delineated target element representing a reference object of the specific category;
taking the intersection of all delineated target elements as the initial value of the scene reference object.
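As a sketch, the intersection of the delineated elements can be taken pixelwise when each element is represented as a boolean mask over the frame. The mask representation is an assumption; the patent only specifies that an intersection is taken:

```python
import numpy as np

def initial_reference(masks):
    """Intersect the target-element masks of the first few frames.

    Each mask marks where the chosen category (e.g. lane markings) was
    found in one frame; the intersection keeps only the pixels present
    in every frame, giving the initial reference Mark_init.
    """
    ref = masks[0].copy()
    for m in masks[1:]:
        ref &= m            # pixelwise AND = set intersection of regions
    return ref

m1 = np.array([[True, True], [False, True]])
m2 = np.array([[True, False], [False, True]])
print(initial_reference([m1, m2]).astype(int))
```

Intersecting over several frames filters out transient objects (vehicles, pedestrians) that appear in some frames but not others, so only the invariant elements survive into the reference.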
Further, processing the video image data to obtain the target element of each video frame comprises:
detecting the video image data with a deep-learning-based object detection algorithm to obtain the detection elements of each video frame;
segmenting the video image data with a deep-learning-based object segmentation algorithm to obtain the segmentation elements of each video frame.
Further, determining the first scene similarity of the current video frame according to its target element and the scene reference object comprises:
performing the calculation according to the similarity formula;
where score_q denotes the first scene similarity of the q-th video frame serving as the current video frame, Reg_t and Rec_o denote the regions occupied by the scene reference object, Reg_k denotes the region of the k-th detection element in the q-th video frame, Rec_l denotes the region of the l-th segmentation element in the q-th video frame, n denotes the number of detection elements in the q-th video frame, and m denotes the number of segmentation elements in the q-th video frame.
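The similarity formula itself appears only as an image in the published document and does not survive text extraction. Reading the variable definitions above, one consistent possibility (an assumed reconstruction, not the patent's verbatim equation) is a mean region-overlap score over the detection and segmentation elements:

```latex
\mathrm{score}_q \;=\; \frac{1}{n+m}\left(
  \sum_{k=1}^{n} \frac{\lvert Reg_k \cap Reg_t \rvert}{\lvert Reg_k \cup Reg_t \rvert}
  \;+\;
  \sum_{l=1}^{m} \frac{\lvert Rec_l \cap Rec_o \rvert}{\lvert Rec_l \cup Rec_o \rvert}
\right)
```

Under any such overlap-based form, score_q is near 1 when the frame's elements coincide with the reference regions and falls toward 0 as they drift apart, which matches the thresholding logic that follows.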
Further, updating the scene reference object comprises:
setting a sliding window for selecting several consecutive video frames at its position;
moving the sliding window in the vicinity of the current video frame; for any current position of the sliding window during the movement, delineating the corresponding target elements in the consecutive video frames selected by the window at that position, each delineated target element representing a reference object of the specific category; taking the intersection of all delineated target elements as a temporary scene reference object; determining, according to the target element of the current video frame and the temporary scene reference object, a second scene similarity for the temporary scene reference object; when the second scene similarity exceeds a second threshold, stopping the movement of the sliding window and adopting the temporary scene reference object as the new scene reference object, and otherwise continuing to move the sliding window.
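The update procedure can be sketched as a loop; `elements_of`, `intersect`, and `similarity` are hypothetical stand-ins for the detection/segmentation machinery and the similarity formula, and comparing the newest frame of the window against the temporary reference is one possible reading of "the current video frame":

```python
def update_reference(frames, q, j, thre2, elements_of, intersect, similarity):
    """Slide a j-frame window starting near frame q; once the newest
    frame agrees with the window's element intersection (Mark_temp)
    beyond thre2, the scene has stabilized and Mark_temp becomes the
    new reference."""
    pos = max(0, q - j + 1)
    while pos + j <= len(frames):
        window = frames[pos:pos + j]
        mark_temp = intersect([elements_of(f) for f in window])
        if similarity(elements_of(window[-1]), mark_temp) > thre2:
            return mark_temp          # scene stopped changing: adopt Mark_temp
        pos += 1                      # otherwise advance the window one frame
    return None                       # clip ended before the scene stabilized

# Toy run with frames as sets of visible landmark ids:
frames = [{1, 2}, {1, 2}, {1, 2, 3}, {2, 3}, {2, 3}]
jaccard = lambda a, b: len(a & b) / len(a | b) if a | b else 1.0
intersect_all = lambda xs: set.intersection(*xs)
new_ref = update_reference(frames, q=2, j=2, thre2=0.9,
                           elements_of=lambda f: f,
                           intersect=intersect_all, similarity=jaccard)
print(new_ref)  # {2, 3}
```

In the toy run the window at frame 2 still straddles the old and new scenes (similarity 2/3), so it advances once more and then locks onto the stable post-change scene {2, 3}.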
Further, determining the second scene similarity of the temporary scene reference object according to the target element of the current video frame and the temporary scene reference object comprises:
performing the calculation according to the similarity formula;
where score_q denotes the second scene similarity, Reg_t and Rec_o denote the regions occupied by the temporary scene reference object, Reg_k denotes the region of the k-th detection element in the q-th video frame serving as the current video frame, Rec_l denotes the region of the l-th segmentation element in the q-th video frame, n denotes the number of detection elements in the q-th video frame, and m denotes the number of segmentation elements in the q-th video frame.
In another aspect, an embodiment of the present invention further provides a scene change detection system based on video analysis, comprising:
a first module for acquiring video image data, the video image data comprising a plurality of video frames;
a second module for setting an initial value of a scene reference object according to the video image data;
a third module for processing the video image data to obtain a target element corresponding to each of the video frames;
a fourth module for traversing the video frames in their order within the video image data; for any current video frame in the traversal, determining a first scene similarity for the current video frame according to the target element of the current video frame and the scene reference object; when the first scene similarity is below a first threshold, updating the scene reference object, and otherwise keeping it unchanged; the updated or maintained scene reference object is then used to determine the first scene similarity of subsequent video frames.
The beneficial effects of the present invention are as follows. The scene change detection method of the embodiments relies on deep features of the invariant elements in a scene, extracted by deep learning, so no hand-designed features are needed and the robustness of the scene description is improved. Updating the scene reference object whenever a scene change is detected in the current video frame lets the reference follow scene changes across frames, so the criterion for judging scene changes stays automatically up to date; this both makes the per-frame scene-change judgment more accurate and raises the degree of automation of scene change detection.
Brief Description of the Drawings
Fig. 1 is a flowchart of the scene change detection method based on video analysis in the embodiment;
Fig. 2 is a schematic diagram of the scene change detection method based on video analysis in the embodiment;
Fig. 3 is a flowchart of updating the scene reference object in the embodiment.
Detailed Description
The scene change detection problem can be defined as follows: when an external force or human interference changes the shooting angle of a surveillance camera and disrupts its normal monitoring function, the system should automatically detect this change of scene in time and raise an alert.
In this embodiment, referring to Fig. 1, the scene change detection method based on video analysis comprises the following steps:
S1. acquire video image data;
S2. process the video image data to obtain the target element of each video frame;
S3. set the initial value of the scene reference object according to the video image data;
S4. traverse the video frames in their order within the video image data; for any current video frame in the traversal, determine the first scene similarity of the current video frame according to its target element and the scene reference object; when the first scene similarity is below the first threshold, update the scene reference object, and otherwise keep it unchanged; the updated or maintained scene reference object is then used to determine the first scene similarity of subsequent video frames.
In this embodiment, the principle of steps S1-S4 is shown in Fig. 2.
In step S1, a fixed scene can be shot with a camera installed in the scene to obtain an original video data file, or historical video data files saved by a camera in the scene can be retrieved as the original video data file. The original video data file can be expressed as {frame_1, frame_2 … frame_q}, where frame_q denotes a video frame of the file.
The original video data file {frame_1, frame_2 … frame_q} is then subjected to image-size conversion, color-space conversion, and similar processing, transforming every frame to a standard size and a standard color space; the resulting frames {frame_1′, frame_2′ … frame_q′} constitute the video image data.
In this embodiment, step S2, i.e. processing the video image data to obtain the target element of each video frame, can be carried out as follows:
S201. detect the video image data with a deep-learning-based object detection algorithm to obtain the detection elements of each video frame;
S202. segment the video image data with a deep-learning-based object segmentation algorithm to obtain the segmentation elements of each video frame.
In step S201, each frame of the video image data {frame_1′, frame_2′ … frame_q′} is detected with a deep-learning-based object detection algorithm to obtain its detection elements. Taking the q-th video frame frame_q as an example, detection yields the category information Cla_m and region information Rec_m of m targets, forming Det_q = {{Cla_1, Rec_1}, {Cla_2, Rec_2} … {Cla_m, Rec_m}}. Det_q is the detection element of the q-th video frame frame_q.
In step S202, each frame of the video image data {frame_1′, frame_2′ … frame_q′} is segmented with a deep-learning-based object segmentation algorithm to obtain its segmentation elements. Taking the q-th video frame frame_q as an example, segmentation yields the category information Lab_n and region information Reg_n of n targets, forming Seg_q = {{Lab_1, Reg_1}, {Lab_2, Reg_2} … {Lab_n, Reg_n}}. Seg_q is the segmentation element of the q-th video frame frame_q.
In this embodiment, the detection element Det_q and segmentation element Seg_q of the q-th video frame frame_q together form its target element {Det_q, Seg_q}. The set of target elements over all video frames is written {{Det_1, Seg_1}, {Det_2, Seg_2} … {Det_q, Seg_q}}.
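A hypothetical in-memory layout for these per-frame structures (the field names mirror Cla/Rec and Lab/Reg; the concrete values are invented for illustration):

```python
# Detection elements Det_q: class label Cla plus a bounding region Rec,
# here an (x, y, w, h) box. Segmentation elements Seg_q: label Lab plus
# a region Reg, here a list of pixel coordinates.
det_q = [
    {"Cla": "Road", "Rec": (10, 20, 200, 120)},
    {"Cla": "Car",  "Rec": (60, 40, 80, 50)},
]
seg_q = [
    {"Lab": "Line", "Reg": [(12, 118), (14, 119), (16, 120)]},
]
# The frame's target element pairs the two:
target_element_q = {"Det": det_q, "Seg": seg_q}
print(len(target_element_q["Det"]), len(target_element_q["Seg"]))  # 2 1
```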
In this embodiment, step S3, i.e. setting the initial value of the scene reference object according to the video image data, can be carried out as follows:
S301. set a specific category of reference object;
S302. select the first several video frames from the video image data;
S303. delineate the corresponding target element in each selected video frame, each delineated target element representing a reference object of the specific category;
S304. take the intersection of all delineated target elements as the initial value of the scene reference object.
In step S301, take a traffic scene as an example: since the markings and lane lines of the road area are fixed, they can serve as the salient feature of the traffic scene and hence as the standard for scene change detection. The specific category of the reference object is therefore set to "road markings", concretely {Cla=Road, Lab=Line}.
In step S302, the first i video frames may be selected from the video image data {frame_1′, frame_2′ … frame_q′}.
In step S303, for each of the i video frames selected in step S302, the target element representing the reference object of the specific category (per step S301, the road markings) is delineated. Since the target element of every frame represents a reference object of the specific category, and that object is invariant within the given scene (the road area), the target element of every frame is an invariant element.
In step S304, the intersection Mark_init = {{Det_1, Det_2, … Det_x}, {Seg_1, Seg_2, … Seg_y}} of all target elements delineated in step S303 is computed, yielding the initial value Mark_init of the scene reference object.
The principle of step S4 is to judge scene changes over the real-time sequence of video frames. For this judgment, a scene similarity measure is built on a reference calibration object formed from the scene's invariant elements. The idea is that a scene contains fixed elements that can serve as a reference for position matching between consecutive frames: while the scene is unchanged, the calibration objects detected in successive frames coincide closely with the reference calibration object, i.e. the scene similarity is high; when the scene changes, the detected objects overlap the reference poorly or not at all, i.e. the scene similarity is low.
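The "degree of coincidence" between a detected object and the reference can be made concrete as intersection-over-union; representing regions as (x, y, w, h) boxes is an illustrative choice, not mandated by the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes: one concrete
    measure of positional coincidence between a detected calibration
    object and the reference calibration object."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))  # 0.0
```

IoU is 1 for perfect coincidence and decays toward 0 as the detected object drifts away from the reference, exactly the high-similarity / low-similarity behavior the step-S4 principle describes.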
Based on this principle, in step S4 the video image data {frame_1′, frame_2′ … frame_q′} is traversed starting from the first video frame frame_1.
At the start, the current video frame is the first video frame frame_1. From the target element of frame_1 and the scene reference object (at this point its initial value), the first scene similarity score_1 of frame_1 is computed according to the similarity formula, where score_1 denotes the first scene similarity of the first (current) video frame, Reg_t and Rec_o denote the regions occupied by the scene reference object, Reg_k denotes the region of a detection element of the current video frame, Rec_l denotes the region of a segmentation element of the current video frame, n denotes the number of detection elements, and m denotes the number of segmentation elements. The similarity score_1 is then compared with the first threshold thre_1. If score_1 > thre_1, the scene has not changed (it is still the "road area"), and traversal moves on to the second video frame frame_2, which replaces frame_1 as the current video frame. The first scene similarity score_2 is computed against the unchanged scene reference object, and it is again determined whether score_2 > thre_1 (scene unchanged) or score_2 ≤ thre_1 (scene changed). While score_q > thre_1 keeps holding, traversal continues to the next video frame, repeating the same procedure until all video frames of {frame_1′, frame_2′ … frame_q′} have been traversed.
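The traversal just described can be sketched as a loop; `similarity`, `elements_of`, and `rebuild_reference` are hypothetical hooks standing in for the similarity formula and the sliding-window update of Fig. 3:

```python
def detect_scene_changes(frames, mark_init, thre1,
                         elements_of, similarity, rebuild_reference):
    """Step S4: walk the frames in order and score each against the
    current reference; a score below thre1 flags a scene change and
    triggers a reference rebuild before the traversal continues."""
    mark = mark_init
    changes = []
    for q, frame in enumerate(frames):
        if similarity(elements_of(frame), mark) < thre1:
            changes.append(q)                      # scene changed at frame q
            mark = rebuild_reference(frames, q)    # update the reference
        # otherwise the reference is kept unchanged
    return changes

# Toy run: frames carry a scene id; similarity is 1 within a scene, 0 across.
frames = [0, 0, 1, 1, 0]
changes = detect_scene_changes(
    frames, mark_init=0, thre1=0.5,
    elements_of=lambda f: f,
    similarity=lambda a, b: 1.0 if a == b else 0.0,
    rebuild_reference=lambda fs, q: fs[q])
print(changes)  # [2, 4]
```

Because the reference is rebuilt at each flagged change, the second change (back to scene 0 at frame 4) is also caught, which a fixed reference would miss.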
In step S4, as soon as score_q ≤ thre_1 for any current video frame (say the q-th frame frame_q), the scene has changed. Referring to Fig. 3, traversal pauses before entering the next ((q+1)-th) video frame, the scene reference object is updated, and only then does traversal continue with the (q+1)-th frame. For subsequent frames (e.g. the (q+1)-th), the first scene similarity is computed against the updated scene reference object.
To perform the update (see Fig. 3), a sliding window is set up. The window has a fixed size, selecting j consecutive video frames, and its position is variable, i.e. any j consecutive frames of the video image data {frame_1′, frame_2′ … frame_q′} can be selected.
Initially, the sliding window is placed near the current video frame (the q-th frame frame_q), selecting j consecutive frames that include it (frame_q may sit anywhere within these j frames). For these j consecutive frames, the detection elements {Det_1, Det_2, … Det_z} and segmentation elements {Seg_1, Seg_2, … Seg_r} of every frame are delineated, and the intersection of all delineated target elements, Mark_temp = {{Det_1, Det_2, … Det_z}, {Seg_1, Seg_2, … Seg_r}}, gives the temporary scene reference object Mark_temp.
Once Mark_temp is obtained, the second scene similarity score_q of Mark_temp is computed from Mark_temp and the target element of the current video frame (the q-th frame frame_q), again using the similarity formula, where Reg_t and Rec_o denote the regions occupied by the temporary scene reference object, Reg_k denotes the region of a detection element of the q-th (current) video frame, Rec_l denotes the region of a segmentation element of the q-th video frame, n denotes the number of detection elements in the q-th frame, and m denotes the number of segmentation elements in the q-th frame.
根据不同场景设置相似度第二阈值thre2用于判断场景是否停止改变。当第二场景相似度scoreq≤第二阈值thre2,表示场景仍然发生改变,将滑动窗口移动到新的位置(例如可以沿着视频图像数据{frame1,frame2…frameq}的时间轴方向前进一帧),移动后的滑动窗口能够选择新的一组连续的j个视频帧,从而重新计算第二场景相似度scoreq,若新的第二场景相似度scoreq仍小于或等于第二阈值thre2,则继续重复执行将滑动窗口移动到新的位置等过程。The second threshold of similarity thre 2 is set according to different scenes to determine whether the scene stops changing. When the second scene similarity score q ≤ the second threshold thre 2 , it means that the scene still changes, and the sliding window is moved to a new position (for example, along the time axis of the video image data {frame 1 , frame 2 ... frame q } direction forward one frame), the moved sliding window can select a new set of continuous j video frames, thereby recalculating the second scene similarity score q , if the new second scene similarity score q is still less than or equal to the first If the second threshold value is thre 2 , the process of moving the sliding window to a new position and so on will be repeated.
As soon as score_q > thre_2, the scene of the latest j video frames is no longer changing. The temporary scene reference object Mark_temp, obtained by intersecting all target elements of these latest j video frames, is then adopted as the updated scene reference object, completing the update of the scene reference object.
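The sliding-window update procedure of the two paragraphs above can be sketched as a short loop. The callables `build_temp_mark` (intersecting the target elements of the j frames) and `similarity` (computing score_q) are placeholders, since the patent does not give their implementations:

```python
def update_reference(frames, j, thre2, build_temp_mark, similarity):
    """Slide a window of j consecutive frames along the video; once the
    similarity of the window's temporary reference object Mark_temp to the
    window's latest frame exceeds thre2, adopt Mark_temp as the updated
    scene reference object. Helper callables are assumed, not specified."""
    start = 0
    while start + j <= len(frames):
        window = frames[start:start + j]
        mark_temp = build_temp_mark(window)       # intersect target elements
        score_q = similarity(mark_temp, window[-1])
        if score_q > thre2:                       # scene has stopped changing
            return mark_temp                      # updated scene reference object
        start += 1                                # advance the window one frame
    return None                                   # scene never stabilized in this clip
```

Returning `None` when the clip ends without the score exceeding thre_2 is one possible design choice; the patent text only describes repeating the window movement.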
The scene change detection method based on video analysis in this embodiment relies on deep features of the invariant elements in the scene, extracted by deep learning, so no hand-designed features are needed, which improves the robustness of the scene description features. Updating the scene reference object whenever a scene change is detected in the current video frame lets the reference object track the scene as it changes, keeping the change-detection criterion automatically up to date. This both makes the per-frame scene change judgment more accurate and raises the degree of automation of scene change detection.
The method performs scene change detection on consecutive video frames using deep features, so it can detect scene changes in real-time surveillance streams or video files accurately and quickly, and it works under the varied lighting and weather conditions of real scenes, which strengthens the robustness of the algorithm.
The method matches the scene-element category information and spatial position information extracted via deep features against the scene reference object, and detects scene changes from differences in positional similarity. It also provides a method for updating the scene reference object after a scene change, which makes it better suited to practical applications.
This embodiment also provides a scene change detection system based on video analysis, comprising:
a first module, configured to acquire video image data, the video image data comprising a plurality of video frames;
a second module, configured to set the initial value of the scene reference object according to the video image data;
a third module, configured to process the video image data to obtain the target elements corresponding to each video frame;
a fourth module, configured to traverse the video frames in their order within the video image data; for any current video frame in the traversal, it determines the first scene similarity of that frame from the frame's target elements and the scene reference object; when the first scene similarity is below the first threshold it updates the scene reference object, and otherwise keeps the scene reference object unchanged; the updated or maintained scene reference object is then used to determine the first scene similarity of subsequent video frames.
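The four modules above can be outlined as a minimal class skeleton. The element extractor, the similarity function, and the in-place reference update are illustrative assumptions — in the patent itself the update goes through the sliding-window procedure described earlier:

```python
class SceneChangeDetector:
    """Skeleton of the four-module system; helper callables and the
    simplified reference update are assumptions, not the patent's design."""

    def __init__(self, extract_elements, similarity, thre1):
        self.extract_elements = extract_elements  # module 3: per-frame target elements
        self.similarity = similarity              # first-scene-similarity function
        self.thre1 = thre1                        # first threshold
        self.reference = None                     # scene reference object

    def run(self, frames):
        """Modules 1, 2, and 4: take the acquired frames, set the initial
        reference from the first frame, then traverse the frames and record
        (and react to) detected scene changes."""
        changes = []
        for q, frame in enumerate(frames):
            elements = self.extract_elements(frame)
            if self.reference is None:            # module 2: initial value
                self.reference = elements
                continue
            score = self.similarity(elements, self.reference)
            if score < self.thre1:                # scene changed
                changes.append(q)
                self.reference = elements         # simplified in-place update
        return changes
```

In practice the extractor would wrap a deep detection/segmentation model and the similarity would be the region-overlap score discussed above; both are left as injectable callables here.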
When the video-analysis-based scene change detection system runs, the first module executes step S1 of the scene change detection method, the second module executes step S2, the third module executes step S3, and the fourth module executes step S4. The system can therefore carry out the scene change detection method based on video analysis and achieve its technical effects.
A computer program implementing the scene change detection method of this embodiment can be written and stored on a storage medium or in a computer device; when the program is read and run, it executes the method and achieves the same technical effects as the method described in the embodiment.
It should be noted that, unless otherwise specified, when a feature is described as "fixed" or "connected" to another feature, it may be directly fixed or connected to that feature, or indirectly fixed or connected through an intermediate feature. Descriptions such as up, down, left, and right are used only with respect to the mutual positions of the components of this disclosure as shown in the drawings. As used in this disclosure, the singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all technical and scientific terms used in this embodiment have the meanings commonly understood by those skilled in the art; the terms used in this description are intended to describe specific embodiments, not to limit the invention. The term "and/or" as used in this embodiment includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in this disclosure to describe various elements, these elements are not limited by those terms, which serve only to distinguish elements of the same type from one another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of this disclosure. The use of any and all examples or exemplary language ("for example", "such as", etc.) provided in the embodiments is intended merely to better illustrate embodiments of the invention and, unless otherwise claimed, does not limit its scope.
It should be appreciated that embodiments of the invention may be realized by computer hardware, by a combination of hardware and software, or by computer instructions stored in a non-transitory computer-readable memory. The methods can be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner according to the methods and figures described in the embodiments. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system, or, if desired, in assembly or machine language; in any case, the language may be compiled or interpreted. Furthermore, the program can run on an application-specific integrated circuit programmed for this purpose.
Furthermore, the operations of the processes described in this embodiment may be performed in any suitable order unless otherwise indicated here or clearly contradicted by context. The processes (or variants and/or combinations thereof) may be executed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (for example, executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or by a combination thereof. The computer program comprises a plurality of instructions executable by one or more processors.
Further, the methods may be implemented on any type of suitably connected computing platform, including but not limited to personal computers, minicomputers, mainframes, workstations, networked or distributed computing environments, and stand-alone or integrated computer platforms, or in communication with charged-particle tools or other imaging devices. Aspects of the invention may be implemented as machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform — such as a hard disk, an optically readable and/or writable storage medium, RAM, or ROM — so that it can be read by a programmable computer; when the storage medium or device is read by the computer, it can configure and operate the computer to perform the processes described here. The machine-readable code, or portions of it, may also be transmitted over a wired or wireless network. The invention described in this embodiment includes these and other types of non-transitory computer-readable storage media when they carry instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described here.
A computer program can be applied to input data to perform the functions described in this embodiment, transforming the input data to generate output data that is stored in non-volatile memory. The output information may also be sent to one or more output devices such as a display. In preferred embodiments of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of such objects produced on a display.
The above is only a preferred embodiment of the present invention, and the invention is not limited to the above implementation. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention that achieves its technical effects by the same means shall fall within the scope of protection of the invention, and various modifications and changes may be made to its technical solutions and/or implementations within that scope.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211704536.2A CN115909219A (en) | 2022-12-29 | 2022-12-29 | Scene change detection method and system based on video analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115909219A true CN115909219A (en) | 2023-04-04 |
Family
ID=86493917
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116645530A (en) * | 2023-04-23 | 2023-08-25 | 广东建瀚工程管理有限公司 | Construction detection method, device, equipment and storage medium based on image comparison |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||