CN111091136B - Video scene change detection method and system - Google Patents
Video scene change detection method and system
- Publication number: CN111091136B
- Application number: CN201811238433.5A
- Authority
- CN
- China
- Prior art keywords
- feature number
- detection result
- image
- reference image
- current image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a video scene change detection method and system. The method comprises the following steps: acquiring a reference image and a current image; extracting SIFT features from the reference image and from the current image to obtain a first feature number and a second feature number; judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation and counting the matching features shared by the two images after filtering; and obtaining a detection result according to this shared feature count. Because SIFT features are extracted for comparison, the method is robust to a variety of weather conditions; when the SIFT feature counts alone cannot decide the result, the method uses the RANSAC algorithm to filter the interference caused by dynamic moving targets, which improves detection accuracy. The invention can be widely applied in the field of image processing.
Description
Technical Field
The invention relates to image processing technology, and in particular to a video scene change detection method and system.
Background
When the scene of a camera changes, for example because the camera has been rotated by a person or for some other reason, the monitored area deviates from the designated monitoring area. Existing detection means include detection based on video coding and detection based on histogram differences between video frames, but the accuracy of these methods is not high. Detection based on video coding mainly relies on a dynamic threshold on the number of coding bits and on AC-energy-based image similarity; this approach is technically complex, difficult to implement, and not very accurate. The scene of an outdoor camera is complex: it is affected by weather factors such as illumination, haze and rain, as well as by changing elements such as people, vehicles and objects, so scene movement is difficult to detect accurately by considering only AC image similarity and a dynamic threshold on the number of coding bits. A video-coding-based scene change detector may also use the motion vector estimation of predictive coding, but by the same analysis this is likewise difficult to implement. The method based on histogram differences between video frames mainly computes the histograms of the video frames, computes the differences between the histograms, and decides whether a scene change has occurred according to a set threshold; by the same analysis its accuracy is also low and the threshold is difficult to determine.
The above algorithms are therefore not well suited to scene change detection for an outdoor surveillance camera.
Disclosure of Invention
In order to solve the above technical problems, the object of the invention is to provide a video scene change detection method and system suitable for an outdoor surveillance camera.
The first technical solution adopted by the invention is as follows:
a video scene change detection method comprising the steps of:
the acquisition step: acquiring a reference image and a current image;
the extraction step: extracting SIFT features of the reference image to obtain a first feature number; extracting SIFT features of the current image to obtain a second feature number;
a primary judging step: judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, executing a secondary judging step;
the secondary judging step comprises:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
and obtaining a detection result according to this shared feature count.
Further, the following step is included between the acquisition step and the extraction step:
the reference image and the current image are reduced to a set size while maintaining the aspect ratio.
Further, the first condition is that at least one of the first feature number and the second feature number is 0.
Further, obtaining a detection result according to the first feature number and the second feature number specifically comprises:
if the first feature number and the second feature number are both 0, normal is returned as the detection result;
if only one of the first feature number and the second feature number is 0, an abnormality is returned as the detection result.
Further, obtaining a detection result according to the shared feature count specifically comprises:
judging whether the shared feature count is greater than or equal to a first set threshold; if so, acquiring the matrix parameters of the homography transformation and obtaining a detection result according to the matrix parameters; otherwise, an abnormality is returned as the detection result.
Further, obtaining a detection result according to the matrix parameters specifically comprises:
judging whether at least one of the h3 parameter and the h6 parameter among the matrix parameters has a value larger than a second set threshold; if so, an abnormality is returned as the detection result; otherwise, normal is returned as the detection result.
Further, the current image is a frame decoded from the current video stream, and the reference image is either the image stored the last time the detection result was normal or a pre-stored image.
Further, reducing the reference image and the current image to the set size while maintaining the aspect ratio specifically means:
the reference image and the current image are reduced to a width of 480 pixels with the aspect ratio maintained.
The second technical solution adopted by the invention is as follows:
a video scene change detection system, comprising:
the acquisition module is used for acquiring a reference image and a current image;
the extraction module is used for extracting SIFT features of the reference image to obtain a first feature number; extracting SIFT features of the current image to obtain a second feature number;
the primary judging module is used for judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, handing processing over to the secondary judging module;
the secondary judging module is used for:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
and obtaining a detection result according to this shared feature count.
The third technical solution adopted by the invention is as follows:
a video scene change detection system, comprising:
a memory for storing a program;
and a processor for loading the program to execute the video scene change detection method.
The beneficial effects of the invention are as follows: because SIFT features are extracted for comparison, the invention is robust to various weather conditions, and when the SIFT feature counts satisfy a specific condition it can quickly judge whether the current video scene is abnormal, that is, whether a scene change has occurred; when the SIFT feature counts do not satisfy that condition, the invention uses a RANSAC algorithm based on homography transformation to filter the interference caused by dynamic moving targets such as people, vehicles and animals, which improves detection accuracy.
Drawings
FIG. 1 is a flow chart of a video scene change detection method according to an embodiment of the invention;
FIG. 2 is a flowchart of the primary judging step according to an embodiment of the present invention;
FIG. 3 is a flowchart of the secondary judging step according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and specific examples.
Referring to fig. 1, a video scene change detection method includes the steps of:
The acquisition step: acquiring a reference image and a current image. The reference image is a normal image captured by the camera; it can be the image retained the last time the camera was detected to be normal, or an image retained when the camera was installed. The current image can be obtained by decoding the video stream.
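As an illustration only (the patent does not prescribe any particular library or source), the following sketch uses OpenCV to load a stored reference image and decode one frame of the current video stream as the current image; the stream URL and file name are hypothetical.

```python
import cv2

def acquire_images(stream_url="rtsp://camera.example/stream",  # hypothetical stream URL
                   reference_path="reference.jpg"):             # hypothetical stored image
    """Return (reference_image, current_image) as BGR arrays."""
    reference = cv2.imread(reference_path)
    if reference is None:
        raise IOError("reference image not found: " + reference_path)

    cap = cv2.VideoCapture(stream_url)
    ok, current = cap.read()      # decode one frame from the current video stream
    cap.release()
    if not ok:
        raise IOError("could not decode a frame from the video stream")
    return reference, current
```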
The extraction step: extracting SIFT features of the reference image to obtain a first feature number; extracting SIFT features of the current image to obtain a second feature number. Extracting SIFT features is prior art. SIFT, the scale-invariant feature transform, is a local feature descriptor used in image processing. It is scale-invariant and can detect key points in an image. SIFT features are based on points of interest in the local appearance of an object and are independent of the size and rotation of the image; their tolerance to changes in illumination, noise and small changes in viewing angle is also quite high. Thanks to these characteristics they are highly distinctive and relatively easy to retrieve: in a large feature database, objects are easily identified and rarely misidentified. SIFT description also gives a fairly high detection rate under partial occlusion, and even 3 or more SIFT features of an object are enough to compute its position and orientation.
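A minimal sketch of the extraction step, assuming OpenCV's SIFT implementation (cv2.SIFT_create, available in OpenCV 4.4 and later) and a grayscale conversion before detection; neither choice is mandated by the patent.

```python
import cv2

def count_sift_features(image_bgr):
    """Detect SIFT keypoints on a grayscale copy of the image and return their count."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return len(keypoints), keypoints, descriptors

# first feature number N1 from the reference image, second feature number N2 from the current image:
# n1, kp_ref, des_ref = count_sift_features(reference)
# n2, kp_cur, des_cur = count_sift_features(current)
```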
A primary judging step: judging whether the first feature number and the second feature number satisfy the first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, executing the secondary judging step. By comparing the SIFT feature counts of the two images, whether the video scene has changed can be decided directly when the specific condition is satisfied. When the specific condition is not satisfied, the current image and the reference image require further judgment.
The secondary judging step comprises the following steps:
Filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering. Here, interference features refer to moving objects such as people, vehicles and animals. A homography transformation is parameterised by a 3x3 matrix. Of the term's several meanings, the one of interest here is planar homography, defined as a projective mapping from one plane to another; the mapping of points on a two-dimensional plane onto a camera imager is one example. RANSAC, short for Random Sample Consensus, is an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers and thereby obtains the valid samples; it is widely used in computer vision. In this embodiment, the homography-based RANSAC algorithm mainly judges whether pairs of feature points satisfy the geometric relationship of a homography mapping; it is used to retain the static points of the real-world scene, filter out dynamic points, and thereby eliminate interfering objects.
Obtaining a detection result according to the shared feature count. From the shared feature count obtained above, whether the video scene has changed can be judged.
In a preferred embodiment, in order to reduce the computational load of the system, the following step is included between the acquisition step and the extraction step:
the reference image and the current image are reduced to a set size while maintaining the aspect ratio.
By shrinking the image, some inconspicuous features are removed and processing speed is increased.
The invention will be further described with reference to specific decision logic and specific thresholds.
With reference to fig. 2, as a preferred embodiment, this embodiment describes the specific flow of the primary judging step. The first condition is that at least one of the first feature number N1 and the second feature number N2 is 0, covering three cases: only N1=0, only N2=0, and both N1=0 and N2=0. Since at least 4 point correspondences are needed to estimate the homography matrix parameters, the number of static feature points in the real-world scene is assumed to be no less than 4; N1=0 or N2=0 is therefore an extreme case in which no feature points are present at all, so if one count is 0 and the other is not, scene movement has occurred. This embodiment rules out these extreme cases first and improves the accuracy of the algorithm.
Obtaining a detection result according to the first feature number and the second feature number specifically comprises:
if the first feature number and the second feature number are both 0, i.e. N1=0 and N2=0, normal is returned as the detection result;
if only one of the first feature number and the second feature number is 0, i.e. N1=0 or N2=0, an abnormality is returned as the detection result. Here, returning normal means returning a code representing the normal state, for example returning 1 when normal and 0 when abnormal.
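Purely as an illustration of the decision logic above, using 1 for normal and 0 for abnormal as in the example return codes:

```python
def primary_decision(n1, n2):
    """Return 1 (normal), 0 (abnormal), or None when the first condition is not met
    and the secondary judging step must be executed."""
    if n1 == 0 and n2 == 0:
        return 1       # both feature counts are zero: treated as normal
    if n1 == 0 or n2 == 0:
        return 0       # exactly one count is zero: scene movement has occurred
    return None        # first condition not satisfied, fall through to the secondary step
```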
Referring to fig. 3, as a preferred embodiment, this embodiment describes the specific flow of the secondary judging step, which comprises:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
and obtaining a detection result according to this shared feature count.
Obtaining a detection result according to the shared feature count specifically comprises:
judging whether the shared feature count N3 is greater than or equal to a first set threshold; if so, acquiring the matrix parameters of the homography transformation and obtaining a detection result according to those parameters; otherwise, an abnormality is returned as the detection result. According to the experimental results, a first set threshold of 4 gives a good detection effect.
Obtaining a detection result according to the matrix parameters specifically comprises:
judging whether at least one of the h3 parameter and the h6 parameter among the matrix parameters has a value larger than a second set threshold; if so, an abnormality is returned as the detection result; otherwise, normal is returned as the detection result. Among the matrix parameters of the homography transformation, h3 and h6 are the parameters related to displacement. According to the experimental results, a second set threshold of 20 gives a good detection effect.
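The patent labels the displacement-related entries h3 and h6; for a row-major 3x3 homography H these would be H[0,2] and H[1,2], the translation terms, which is an interpretation rather than something the text states explicitly. The absolute-value comparison is likewise an assumption. A sketch of the secondary decision with the example thresholds of 4 and 20:

```python
def secondary_decision(inlier_count, H, first_threshold=4, second_threshold=20):
    """Return 1 (normal) or 0 (abnormal) from the shared feature count and homography H."""
    if H is None or inlier_count < first_threshold:
        return 0                               # too few consistent features: abnormal
    h3, h6 = H[0, 2], H[1, 2]                  # assumed displacement-related parameters
    if abs(h3) > second_threshold or abs(h6) > second_threshold:
        return 0                               # large translation: the scene has moved
    return 1
```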
As a preferred embodiment, the current image is a frame decoded from the current video stream, and the reference image is either the image stored the last time the detection result was normal or a pre-stored image. This embodiment allows detection to run on a real-time video stream without manual intervention.
As a preferred embodiment, reducing the reference image and the current image to a set size while keeping the aspect ratio unchanged specifically means:
the reference image and the current image are reduced to a width of 480 pixels with the aspect ratio maintained. Compressing the image to a width of 480 pixels gives good processing results for 1080P and 720P video.
This embodiment discloses a video scene change detection system for implementing the method shown in fig. 1, comprising:
the acquisition module is used for acquiring a reference image and a current image;
the extraction module is used for extracting SIFT features of the reference image to obtain a first feature number; extracting SIFT features of the current image to obtain a second feature number;
the primary judging module is used for judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, handing processing over to the secondary judging module;
the secondary judging module is used for:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
and obtaining a detection result according to this shared feature count.
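Tying the earlier sketches together, a hypothetical driver showing how the modules listed above might be composed; the function names refer to the illustrative code earlier in this description, not to anything defined by the patent.

```python
def detect_scene_change(reference, current):
    """Return 1 (normal) or 0 (abnormal) for one reference/current image pair."""
    reference = shrink_to_width(reference)   # optional step between acquisition and extraction
    current = shrink_to_width(current)

    n1, kp_ref, des_ref = count_sift_features(reference)   # extraction module
    n2, kp_cur, des_cur = count_sift_features(current)

    result = primary_decision(n1, n2)                       # primary judging module
    if result is not None:
        return result

    n3, H = count_shared_features(kp_ref, des_ref, kp_cur, des_cur)
    return secondary_decision(n3, H)                        # secondary judging module
```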
This embodiment discloses a video scene change detection system, comprising:
a memory for storing a program;
a processor for loading the program to perform a video scene change detection method as shown in fig. 1.
The step numbers in the above method embodiments are set only for convenience of illustration and do not limit the order of the steps in any way; the execution order of the steps in the embodiments may be adjusted adaptively according to the understanding of those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, the invention is not limited to the embodiments described above. Various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the invention, and these equivalent modifications and substitutions are intended to fall within the scope of the invention as defined by the appended claims.
Claims (8)
1. A video scene change detection method, characterized in that the method comprises the following steps:
an acquisition step: acquiring a reference image and a current image;
an extraction step: extracting SIFT features of the reference image to obtain a first feature number; extracting SIFT features of the current image to obtain a second feature number;
a primary judging step: judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, executing a secondary judging step;
the secondary judging step comprising:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
obtaining a detection result according to this shared feature count;
wherein the first condition is that at least one of the first feature number and the second feature number is 0;
and obtaining a detection result according to the first feature number and the second feature number specifically comprises:
if the first feature number and the second feature number are both 0, normal is returned as the detection result;
if only one of the first feature number and the second feature number is 0, an abnormality is returned as the detection result.
2. The video scene change detection method according to claim 1, characterized in that the method further comprises the following step between the acquisition step and the extraction step:
reducing the reference image and the current image to a set size while maintaining the aspect ratio.
3. The video scene change detection method according to claim 1, characterized in that obtaining a detection result according to the shared feature count specifically comprises:
judging whether the shared feature count is greater than or equal to a first set threshold; if so, acquiring the matrix parameters of the homography transformation and obtaining a detection result according to the matrix parameters; otherwise, an abnormality is returned as the detection result.
4. The video scene change detection method according to claim 3, characterized in that obtaining a detection result according to the matrix parameters specifically comprises:
judging whether at least one of the h3 parameter and the h6 parameter among the matrix parameters has a value larger than a second set threshold; if so, an abnormality is returned as the detection result; otherwise, normal is returned as the detection result.
5. The video scene change detection method according to claim 4, characterized in that the current image is a frame decoded from the current video stream, and the reference image is either the image stored the last time the detection result was normal or a pre-stored image.
6. The video scene change detection method according to claim 2, characterized in that reducing the reference image and the current image to a set size while keeping the aspect ratio unchanged specifically comprises:
reducing the reference image and the current image to a width of 480 pixels with the aspect ratio maintained.
7. A video scene change detection system, characterized in that it comprises:
an acquisition module for acquiring a reference image and a current image;
an extraction module for extracting SIFT features of the reference image to obtain a first feature number, and extracting SIFT features of the current image to obtain a second feature number;
a primary judging module for judging whether the first feature number and the second feature number satisfy a first condition, and if so, obtaining a detection result according to the first feature number and the second feature number; otherwise, handing processing over to the secondary judging module;
the secondary judging module being used for:
filtering interference features in the current image and the reference image by using a RANSAC algorithm based on homography transformation, and counting the matching features shared by the current image and the reference image after filtering;
obtaining a detection result according to this shared feature count;
wherein the first condition is that at least one of the first feature number and the second feature number is 0;
and obtaining a detection result according to the first feature number and the second feature number specifically comprises:
if the first feature number and the second feature number are both 0, normal is returned as the detection result;
if only one of the first feature number and the second feature number is 0, an abnormality is returned as the detection result.
8. A video scene change detection system, characterized in that it comprises:
a memory for storing a program;
a processor for loading the program to perform the video scene change detection method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811238433.5A CN111091136B (en) | 2018-10-23 | 2018-10-23 | Video scene change detection method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811238433.5A CN111091136B (en) | 2018-10-23 | 2018-10-23 | Video scene change detection method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111091136A CN111091136A (en) | 2020-05-01 |
CN111091136B (en) | 2023-05-23
Family
ID=70391588
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811238433.5A Active CN111091136B (en) | 2018-10-23 | 2018-10-23 | Video scene change detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111091136B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102821323B (en) * | 2012-08-01 | 2014-12-17 | 成都理想境界科技有限公司 | Video playing method, video playing system and mobile terminal based on augmented reality technique |
- 2018-10-23: CN application CN201811238433.5A, granted as patent CN111091136B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2499963A1 (en) * | 2011-03-18 | 2012-09-19 | SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH | Method and apparatus for gaze point mapping |
CN102982537A (en) * | 2012-11-05 | 2013-03-20 | 安维思电子科技(广州)有限公司 | Scene change detection method and scene change detection system |
CN103646391A (en) * | 2013-09-30 | 2014-03-19 | 浙江大学 | Real-time camera tracking method for dynamically-changed scene |
Non-Patent Citations (2)
Title |
---|
Yang Tao; Zhang Yanning; Zhang Xiuwei; Zhang Xingong. Real-time registration algorithm for aerial video based on scene complexity and invariant features. Acta Electronica Sinica. 2010, (05), full text. *
Ge Heyin; Sun Jianhong; Lin Nan; Wu Fan. De-jittered moving object detection fusing wavelet transform and the SIFT algorithm. Research and Exploration in Laboratory. 2016, (02), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111091136A (en) | 2020-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5908174B2 (en) | Image processing apparatus and image processing method | |
CN109035304B (en) | Target tracking method, medium, computing device and apparatus | |
Cheung et al. | Robust background subtraction with foreground validation for urban traffic video | |
Elhabian et al. | Moving object detection in spatial domain using background removal techniques-state-of-art | |
JP6482195B2 (en) | Image recognition apparatus, image recognition method, and program | |
Nonaka et al. | Evaluation report of integrated background modeling based on spatio-temporal features | |
Fendri et al. | Fusion of thermal infrared and visible spectra for robust moving object detection | |
KR101524548B1 (en) | Apparatus and method for alignment of images | |
CN108399627B (en) | Video inter-frame target motion estimation method and device and implementation device | |
CN103679756A (en) | Automatic target tracking method and system based on color and shape features | |
CN108198205A (en) | A kind of method for tracking target based on Vibe and Camshift algorithms | |
WO2009105812A1 (en) | Spatio-activity based mode matching field of the invention | |
CN105184771A (en) | Adaptive moving target detection system and detection method | |
CN111985314B (en) | Smoke detection method based on ViBe and improved LBP | |
CN111914627A (en) | A vehicle identification and tracking method and device | |
US10708600B2 (en) | Region of interest determination in video | |
Phadke et al. | Illumination invariant mean-shift tracking | |
Huang et al. | Random sampling-based background subtraction with adaptive multi-cue fusion in RGBD videos | |
CN111091136B (en) | Video scene change detection method and system | |
Walha et al. | Moving object detection system in aerial video surveillance | |
Yang et al. | Misaligned RGB-depth boundary identification and correction for depth image recovery | |
Song et al. | An improved vibe algorithm of dual background model for quickly suppressing ghost images | |
Asundi et al. | Raindrop detection algorithm for ADAS | |
CN106250859B (en) | The video flame detecting method spent in a jumble is moved based on characteristic vector | |
WO2013093731A1 (en) | A motion detection method for a video processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |
 | GR01 | Patent grant | |