CN110751124A - Video detection comparison system - Google Patents
- Publication number
- CN110751124A CN110751124A CN201911032557.2A CN201911032557A CN110751124A CN 110751124 A CN110751124 A CN 110751124A CN 201911032557 A CN201911032557 A CN 201911032557A CN 110751124 A CN110751124 A CN 110751124A
- Authority
- CN
- China
- Prior art keywords
- video
- data
- key frame
- target
- target video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention belongs to the technical field of video comparison, and in particular relates to a video detection comparison system comprising: (1) a data acquisition module; (2) a video fingerprint synthesis module; (3) a first verification module; and (4) a second verification module. The system extracts the features of each key frame in the target video and uses the extracted image features directly as the fingerprint of the target video. Satellite three-dimensional data and the user's actual three-dimensional data are introduced to reconstruct pictures and build a live-action model of the video scene, yielding reconstructed video key frames with richer detail features. The similarity between the target video and the reference video is then determined by cross-verifying the reconstructed video key frames against the key-frame number and distance data of the first verification module. This further improves the accuracy of outdoor video comparison, reduces repeated comparison of videos, and saves time.
Description
Technical Field
The invention relates to the technical field of video comparison, and in particular to a video detection comparison system.
Background
With the rise of short-video platforms, video data is growing exponentially, and the volume of outdoor footage is growing rapidly with it; managing this outdoor video data has become a critical task. In particular, measuring the similarity between two videos through video detection technology enables video management services such as deduplication and piracy detection.
The currently common video detection technique judges whether two videos are similar by comparing the distance between their video fingerprints. A video fingerprint is built by extracting features from the video's key frames, reducing the dimensionality of those features with a dimension-reduction algorithm, and finally aggregating or averaging all key-frame features of the video into a fixed-length fingerprint.
With this conventional technique, retrieval based on such video fingerprints is not very effective for general video, and the single detection dimension hampers video management workflows.
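The conventional pipeline described above can be sketched as follows. This is an illustrative toy implementation, not the method of any particular system: the histogram feature, the bin-pooling "dimension reduction", and all function names are assumptions for demonstration.

```python
import numpy as np

def frame_feature(frame: np.ndarray, bins: int = 64) -> np.ndarray:
    """Toy per-key-frame feature: a normalized intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def video_fingerprint(key_frames, dims: int = 16) -> np.ndarray:
    """Fixed-length fingerprint: per-frame features, crude dimension
    reduction (here simple bin pooling), then averaging over frames."""
    feats = np.stack([frame_feature(f) for f in key_frames])
    pooled = feats.reshape(len(key_frames), dims, -1).mean(axis=2)  # reduce
    return pooled.mean(axis=0)                                      # aggregate

def fingerprint_distance(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Two videos are judged similar when this distance is small."""
    return float(np.linalg.norm(fp_a - fp_b))
```

In practice the per-frame features would be learned or engineered descriptors rather than histograms, but the extract-reduce-aggregate shape of the pipeline is the same.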
Disclosure of Invention
To solve the above problems in the prior art, the present invention provides a video detection comparison system, comprising:
(1) a data acquisition module, used for acquiring target video data and three-dimensional data of real objects in the target video;
(2) a video fingerprint synthesis module, used for extracting DC-image information from the target video data and combining the resulting image features, obtained through Harris corner detection, with motion features extracted from inter-frame differences to generate a video fingerprint;
(3) a first verification module, which determines the key frames of a reference video according to the video fingerprint, numbers the determined key frames in video order, and renumbers key frames sharing the same type of features into sets, forming the upper-level video capture catalog data; it records the distances between key frames in the video and associates each recorded distance with the corresponding key frame numbers; the target video fingerprint comprises the image features of the key frames in the reference video;
(4) a second verification module, which extracts the shooting address of the target video from the acquired target video information, captures a satellite image of that location, compares markers visible in the satellite image with markers in the video to obtain an accurate shooting position and shooting angle, matches the display range of the video content onto the satellite map, verifies the three-dimensional data of the real objects in the target video against the real-object parameters within the shooting range on the satellite map, supplements the three-dimensional data of the target video accordingly, and constructs a live-action model of the video scene from the three-dimensional data to obtain reconstructed video key frames;
meanwhile, the second verification module matches the local weather conditions at shooting time according to the time and geographic position recorded in the target video, so as to grade the visibility at that time and reasonably judge weather interference and display errors of scene details in the footage; when the three-dimensional data is used to construct the live-action model of the video scene, positions where video detail is lost are supplemented with key points from the satellite data, so as to increase the comparison parameters of the reconstructed key frames.
The similarity between the target video and the reference video is then determined from the obtained reconstructed video key frames and the key-frame number and distance data of the first verification module.
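As a rough illustration of the fingerprint-synthesis step in (2), corner-strength features can be combined with inter-frame motion features. This sketch uses a plain-NumPy Harris response and absolute frame differences; the windowing, constants, and the way the two features are combined are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.05) -> np.ndarray:
    """Harris corner response computed with simple finite differences."""
    img = img.astype(np.float64)
    iy, ix = np.gradient(img)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):  # crude 3x3 box filter as the structure-tensor window
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

def fingerprint(frames) -> np.ndarray:
    """Combine a corner-strength feature with an inter-frame motion feature."""
    corner = np.mean([harris_response(f).mean() for f in frames])
    motion = np.mean([np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
                      for i in range(len(frames) - 1)])
    return np.array([corner, motion])
```

A real implementation would keep per-frame corner maps rather than a single mean, but the pairing of spatial (corner) and temporal (difference) features mirrors the module described above.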
Further, the target video data acquired by the data acquisition module includes video content shot and uploaded by user terminals and video data migrated from an existing database.
Further, the three-dimensional data of real objects in the target video includes three-dimensional data obtained by the user through automatic point-cloud stitching, or three-dimensional data obtained directly from a three-dimensional scanning device.
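For the second verification module, matching markers between the satellite image and the video ultimately reduces to estimating a shooting position and angle. A minimal sketch, assuming planar map coordinates and already-matched marker positions (both assumptions; the document does not specify the geometry):

```python
import math

def bearing(cam, marker):
    """Compass-style bearing (degrees, 0 = north) from the camera position
    to a marker, positions given as (x_east, y_north) map coordinates."""
    dx, dy = marker[0] - cam[0], marker[1] - cam[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def estimate_shooting_angle(cam, markers_in_view):
    """Approximate the shooting direction as the circular mean bearing of
    markers identified both in the satellite image and in the video frame."""
    angles = [bearing(cam, m) for m in markers_in_view]
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c)) % 360.0
```

The circular mean avoids the wrap-around error a plain average would make for bearings straddling 0°/360°.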
Advantageous effects
By extracting the features of each key frame in the target video and using the extracted image features directly as the fingerprint of the target video, and by introducing satellite three-dimensional data and the user's actual three-dimensional data to reconstruct pictures and build a live-action model of the video scene, the invention obtains reconstructed video key frames with richer detail features. The similarity between the target video and the reference video is determined by cross-verifying the reconstructed video key frames against the key-frame number and distance data of the first verification module. This further improves the accuracy of outdoor video comparison, reduces repeated comparison of videos, and saves time.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution of the present invention is further described by the following specific embodiments, but the scope of the claims is not limited to this description.
Example 1
A video detection comparison system, comprising:
(1) a data acquisition module, used for acquiring target video data and three-dimensional data of real objects in the target video; the target video data includes video content uploaded by user terminals and video data migrated from an existing database; the three-dimensional data of real objects in the target video includes three-dimensional data obtained by the user through automatic point-cloud stitching, or three-dimensional data obtained directly from a three-dimensional scanning device; typically, the user transmits outdoor footage and self-collected three-dimensional data to the data acquisition module through a terminal, with the time and geographic position of shooting accurately recorded in the video information;
(2) a video fingerprint synthesis module, used for extracting DC-image information from the target video data and combining the resulting image features, obtained through Harris corner detection, with motion features extracted from inter-frame differences to generate a video fingerprint;
(3) a first verification module, which determines the key frames of a reference video according to the video fingerprint, numbers the determined key frames in video order, and renumbers key frames sharing the same type of features into sets, forming the upper-level video capture catalog data; it records the distances between key frames in the video and associates each recorded distance with the corresponding key frame numbers; the target video fingerprint comprises the image features of the key frames in the reference video;
(4) a second verification module, which extracts the shooting address of the target video from the acquired target video information, captures a satellite image of that location, compares markers visible in the satellite image with markers in the video to obtain an accurate shooting position and shooting angle, matches the display range of the video content onto the satellite map, verifies the three-dimensional data of the real objects in the target video against the real-object parameters within the shooting range on the satellite map, supplements the three-dimensional data of the target video accordingly, and constructs a live-action model of the video scene from the three-dimensional data to obtain reconstructed video key frames;
meanwhile, the second verification module matches the local weather conditions at shooting time according to the time and geographic position recorded in the target video, so as to grade the visibility at that time and reasonably judge weather interference and display errors of scene details in the footage; when the three-dimensional data is used to construct the live-action model of the video scene, positions where video detail is lost are supplemented with key points from the satellite data, so as to increase the comparison parameters of the reconstructed key frames.
The similarity between the target video and the reference video is then determined from the obtained reconstructed video key frames and the key-frame number and distance data of the first verification module.
The system can detect the similarity between outdoor videos and between videos shot in the same scene, which aids the preliminary classification of video types.
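The first verification module's key-frame numbering and distance records can be compared between a target video and a reference video along these lines. The record layout and the similarity measure are illustrative assumptions, not the method defined in the claims:

```python
def keyframe_records(timestamps):
    """Number key frames in playback order and record the distance between
    consecutive key frames, keyed by their number pair."""
    numbers = list(range(len(timestamps)))
    distances = {(i, i + 1): timestamps[i + 1] - timestamps[i]
                 for i in range(len(timestamps) - 1)}
    return numbers, distances

def sequence_similarity(dist_a, dist_b):
    """Similarity in [0, 1]: 1.0 when matching key-frame pairs are equally
    spaced, decreasing with their relative disagreement."""
    shared = dist_a.keys() & dist_b.keys()
    if not shared:
        return 0.0
    errs = [abs(dist_a[k] - dist_b[k]) / max(dist_a[k], dist_b[k], 1e-9)
            for k in shared]
    return 1.0 - sum(errs) / len(errs)
```

Cross-checking this sequence-level score against the reconstructed-key-frame comparison of the second verification module is what lets the system avoid repeated full comparisons.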
It should be noted that the above examples serve only to further illustrate and explain the technical solution of the present invention and are not to be construed as limiting it; modifications by those skilled in the art that introduce no essentially new features or significant advances still fall within the protection scope of the present invention.
Claims (3)
1. A video detection comparison system, comprising:
(1) a data acquisition module, used for acquiring target video data and three-dimensional data of real objects in the target video;
(2) a video fingerprint synthesis module, used for extracting DC-image information from the target video data and combining the resulting image features, obtained through Harris corner detection, with motion features extracted from inter-frame differences to generate a video fingerprint;
(3) a first verification module, which determines the key frames of a reference video according to the video fingerprint, numbers the determined key frames in video order, and renumbers key frames sharing the same type of features into sets, forming the upper-level video capture catalog data; records the distances between key frames in the video and associates each recorded distance with the corresponding key frame numbers; wherein the target video fingerprint comprises the image features of the key frames in the reference video; and
(4) a second verification module, which extracts the shooting address of the target video from the acquired target video information, captures a satellite image of that location, compares markers visible in the satellite image with markers in the video to obtain an accurate shooting position and shooting angle, matches the display range of the video content onto the satellite map, verifies the three-dimensional data of the real objects in the target video against the real-object parameters within the shooting range on the satellite map, supplements the three-dimensional data of the target video accordingly, and constructs a live-action model of the video scene from the three-dimensional data to obtain reconstructed video key frames;
meanwhile, the second verification module matches the local weather conditions at shooting time according to the time and geographic position recorded in the target video, so as to grade the visibility at that time and reasonably judge weather interference and display errors of scene details in the footage; when the three-dimensional data is used to construct the live-action model of the video scene, positions where video detail is lost are supplemented with key points from the satellite data, so as to increase the comparison parameters of the reconstructed key frames;
and the similarity between the target video and the reference video is determined from the obtained reconstructed video key frames and the key-frame number and distance data of the first verification module.
2. The video detection comparison system of claim 1, wherein the target video data acquired by the data acquisition module comprises video content shot and uploaded by a user terminal and video data migrated from an existing database.
3. The video detection comparison system of claim 1, wherein the three-dimensional data of real objects in the target video comprises three-dimensional data obtained by automatic point-cloud stitching or three-dimensional data obtained directly from a three-dimensional scanning device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911032557.2A CN110751124A (en) | 2019-10-28 | 2019-10-28 | Video detection comparison system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110751124A (en) | 2020-02-04 |
Family
ID=69280496
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911032557.2A Pending CN110751124A (en) | 2019-10-28 | 2019-10-28 | Video detection comparison system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110751124A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112581618A (en) * | 2020-12-23 | 2021-03-30 | 深圳前海贾维斯数据咨询有限公司 | Three-dimensional building model and real scene comparison method and system in building engineering industry |
CN112581618B (en) * | 2020-12-23 | 2024-05-24 | 深圳前海贾维斯数据咨询有限公司 | Three-dimensional building model and real scene comparison method and system in building engineering industry |
CN113469152A (en) * | 2021-09-03 | 2021-10-01 | 腾讯科技(深圳)有限公司 | Similar video detection method and device |
CN114827714A (en) * | 2022-04-11 | 2022-07-29 | 咪咕文化科技有限公司 | Video restoration method based on video fingerprints, terminal equipment and storage medium |
CN114827714B (en) * | 2022-04-11 | 2023-11-21 | 咪咕文化科技有限公司 | Video restoration method, terminal equipment and storage media based on video fingerprinting |
CN115100581A (en) * | 2022-08-24 | 2022-09-23 | 有米科技股份有限公司 | Video reconstruction model training method and device based on text assistance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200204 |