CN101464909A - Fast robust approximately same video detection and exclusion method - Google Patents
- Publication number
- CN101464909A CN101464909A CNA2009100771821A CN200910077182A CN101464909A CN 101464909 A CN101464909 A CN 101464909A CN A2009100771821 A CNA2009100771821 A CN A2009100771821A CN 200910077182 A CN200910077182 A CN 200910077182A CN 101464909 A CN101464909 A CN 101464909A
- Authority
- CN
- China
- Prior art keywords
- video
- key frame
- feature vector
- features
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to a fast and robust near-duplicate video detection and exclusion method, belonging to the technical field of information mining. The method comprises a video feature generation stage, in which keyframe features of the video to be detected are extracted and a global video feature is generated, and a pattern matching stage, in which the global feature vector of the video to be detected is compared against the global feature vectors of existing videos to complete near-duplicate detection. In the feature generation stage, each keyframe image is divided into blocks, the average gray level of each block is extracted as the keyframe feature, the keyframe features are binarized and quantized using principal component analysis, and a global feature is generated from the quantized keyframe features. The invention is applied to the detection and exclusion of near-duplicate videos in large-scale web video databases, achieving higher speed and satisfactory results.
Description
Technical Field
The invention belongs to the technical field of information mining, and relates to a method for detecting and excluding near-duplicate videos in a large-scale web video database.
Background Art
In recent years, with the development of Internet technology, and in particular the rapid growth of bandwidth available to ordinary users, video-centric multimedia applications have increasingly become Internet hotspots. They offer excellent interactivity for information retrieval and digital entertainment, greatly enriching people's Internet experience. People can now easily obtain large amounts of video from the Internet. While online video content has become rich and varied, it also contains a great deal of redundancy, especially repeated uploads of popular videos. Many online videos are essentially identical in content but differ in bit rate, encoding quality, color transformation, manual post-editing, and frame rate, so content-based methods are needed to compare and exclude them; such videos are called near-duplicate videos. Near-duplicate videos waste a great deal of network storage. Previous approaches to this problem fall roughly into three categories: detection based on global features, on shot-level features, and on feature-point-based image-region-level features. In short, global features, such as video color histograms, are simple and cheap to compute, but are only suitable for detecting videos with almost identical content. Shot-level features can detect more heavily edited videos, and matching methods based on feature-point detection such as SIFT have recently appeared that can measure similarity more accurately. However, these methods either require too much computation to generate their features or use matching procedures that are too complex. Given the enormous number of online videos, the abundance of near-duplicate content places high performance demands on detection and exclusion algorithms, so designing a fast and robust method for detecting and excluding near-duplicate videos is imperative.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art by designing a fast and robust near-duplicate video detection and exclusion method. On the basis of video feature analysis, the method first generates for each video an accurate and robust keyframe-based signature, and then matches the abstracted signatures. It is applied to the detection and exclusion of near-duplicate videos in large-scale web video databases, achieving higher speed and satisfactory results.
The fast and robust near-duplicate video detection and exclusion method proposed by the present invention comprises a video feature generation stage, in which keyframe features of the video to be detected are extracted and a global video feature is generated, and a pattern matching stage, in which the global feature vector of the video to be detected is compared against the global feature vectors of existing videos to complete near-duplicate detection. The feature generation stage divides each keyframe image into blocks, extracts the average gray level of each block as the keyframe feature, binarizes and quantizes the keyframe features with principal component analysis (PCA), and generates a global feature from the quantized keyframe features.
The method mainly comprises the following steps:
(1) Generate keyframes of the video to be detected.
(2) Divide each keyframe into blocks to obtain its average-gray feature vector.
(3) Reduce the dimensionality of the average-gray feature vector using principal component analysis.
(4) Binarize the dimension-reduced vector, using as thresholds the mean values obtained from statistics over a large image corpus.
(5) Generate the global feature vector of the video to be detected from the binarized keyframe average-gray feature vectors.
(6) Compare the global feature vector of the video to be detected with the global feature vectors of existing videos, and exclude database videos whose features differ greatly from it.
(7) For database videos that cannot be excluded by the global feature, build a bipartite graph based on the similarity between keyframe-level feature vectors.
(8) Compute the maximum matching value of the bipartite graph with a maximum-matching approximation algorithm, compare it with a threshold Th, and again exclude database videos that differ greatly from the video to be detected.
(9) For videos that still cannot be excluded, extend the bipartite graph into a general graph with a source node and a sink node, compute the maximum matching value of this graph with a graph-cut algorithm, and use it against the threshold to complete near-duplicate detection.
Features and Effects of the Invention:
The method proposed by the present invention for excluding near-duplicate videos in large-scale web video databases is intended for content monitoring of large-scale web video. The method reconstructs video features into an effective, dimension-reduced representation. Simple gray-level features retain most of the information in an image, and the fixed block layout provides a uniform measure for comparing keyframes of different resolutions. Principal component analysis further preserves the main characteristics of the keyframes, retaining the greatest part of their similarity. Because the quantization thresholds are statistics gathered from a large image corpus, binary features that best represent keyframe-level characteristics are extracted directly, which greatly simplifies later processing and ensures high processing speed. The invention also generates the global feature from the keyframe-level features, so this hierarchical detection scheme, using global and local features together, can quickly rule out videos that are not near-duplicates and avoids a heavy computational burden in the later stage.
Foreseeably, this method can be widely applied to web video storage and indexing, and to the post-processing stage of re-ranking search engine results. The method of the present invention also provides a practical framework for near-duplicate video detection, making it easy to incorporate image features with lower computational complexity and higher discriminative power.
Detailed Description of the Embodiments
The fast and robust near-duplicate video detection and exclusion method proposed by the present invention is described in detail below with reference to an embodiment.
The method comprises a video feature generation stage, in which keyframe features of the video to be detected are extracted and a global video feature is generated, and a pattern matching stage, in which the global feature vector of the video to be detected is compared against the global feature vectors of existing videos to complete near-duplicate detection. The feature generation stage divides each keyframe image into blocks, extracts the average gray level of each block as the keyframe feature, binarizes and quantizes the keyframe features with principal component analysis (PCA), and generates a global feature from the quantized keyframe features.
The method of the present invention mainly comprises the following steps:
(1) Generate keyframes of the video to be detected.
(2) Divide each keyframe into blocks to obtain its average-gray feature vector.
(3) Reduce the dimensionality of the average-gray feature vector using principal component analysis.
(4) Binarize the dimension-reduced vector, using as thresholds the mean values obtained from statistics over a large image corpus.
(5) Generate the global feature vector of the video to be detected from the binarized keyframe average-gray feature vectors.
(6) Compare the global feature vector of the video to be detected with the global feature vectors of existing videos, and exclude database videos whose features differ greatly from it.
(7) For database videos that cannot be excluded by the global feature, build a bipartite graph based on the similarity between keyframe-level feature vectors.
(8) Compute the maximum matching value of the bipartite graph with a maximum-matching approximation algorithm, compare it with a threshold Th, and again exclude database videos that differ greatly from the video to be detected.
(9) For videos that still cannot be excluded, extend the bipartite graph into a general graph with a source node and a sink node, compute the maximum matching value of this graph with a graph-cut algorithm, and use it against the threshold to complete near-duplicate detection.
Steps (1)-(5) above constitute the video feature generation stage; steps (6)-(9) constitute the pattern matching stage.
In the above method, the maximum-matching approximation algorithm in step (8) is a known algorithm.
An embodiment of the above method specifically comprises the following steps:
(1) Generate keyframes of the video by sampling one frame every 20 frames.
(2) Divide each keyframe into blocks using 8x8 and 7x7 grids (all blocks within a grid are the same size). Compute the average gray level within each block, and concatenate the block values into a 113-dimensional (64+49) feature vector, which serves as the keyframe's average-gray feature vector.
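As an illustration, the block averaging of step (2) can be sketched as follows. This is a minimal sketch assuming the keyframe is a grayscale NumPy array; the function name and `grids` parameter are not from the patent:

```python
import numpy as np

def block_gray_features(frame, grids=(8, 7)):
    """Concatenate per-block mean gray values for each grid size.

    With 8x8 and 7x7 grids this yields a 64 + 49 = 113-dimensional
    vector, matching the feature described in step (2).
    """
    feats = []
    for g in grids:
        # Split the frame into g x g (nearly) equal blocks and average each.
        for row_band in np.array_split(frame, g, axis=0):
            for block in np.array_split(row_band, g, axis=1):
                feats.append(block.mean())
    return np.array(feats)
```

Because only block means are used, keyframes of different resolutions map to vectors of the same length, which is what makes them directly comparable.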
(3) Use the mean vector computed over a large image corpus as the mean vector required by PCA, and obtain the covariance matrix needed for principal component analysis. Compute the eigenvectors and eigenvalues of the covariance matrix, and select the 64 eigenvectors with the largest eigenvalues to form a new projection matrix A. Perform PCA dimensionality reduction on the keyframe average-gray feature vector generated in (2), i.e. project it through matrix A to obtain a new 64-dimensional feature vector.
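The PCA construction of step (3) might look like the following sketch. Assumed details: the corpus statistics are available as an (n, 113) sample matrix, and `numpy.linalg.eigh` returns eigenvalues in ascending order, so columns are reversed to take the top 64:

```python
import numpy as np

def pca_projection(samples, k=64):
    """Estimate the mean vector and top-k eigenvector projection matrix
    from an (n, d) matrix of keyframe features gathered offline."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    A = eigvecs[:, ::-1][:, :k]             # top-k columns by eigenvalue
    return mean, A

def reduce_feature(x, mean, A):
    """Project a d-dimensional keyframe feature down to k dimensions."""
    return (x - mean) @ A
```

In the patent's setting the mean and projection matrix are computed once from a large image corpus and then reused for every keyframe.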
(4) Compare each dimension of the 64-dimensional feature vector generated in (3) with the corresponding component of the post-PCA mean vector computed over the image corpus, used as the threshold: if the value is greater than or equal to the mean, the bit is set to 1; otherwise it is set to 0. In this way each feature vector is quantized into a 64-bit binary value used as a hash.
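Step (4) is a per-dimension threshold test; a minimal sketch follows. Packing the bits into a Python integer is an implementation choice for illustration, not something the patent specifies:

```python
def binarize(vec, thresholds):
    """Quantize each dimension against its per-dimension threshold
    (>= threshold -> 1, else 0), packing the bits, most significant
    first, into an integer hash."""
    h = 0
    for v, t in zip(vec, thresholds):
        h = (h << 1) | (1 if v >= t else 0)
    return h
```

With 64-dimensional vectors the result fits in a 64-bit word, so two keyframe features can later be compared with a single XOR.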
(5) Generate the 64-dimensional feature vector for every keyframe of a video by the preceding steps, then compare the corresponding bits of all keyframe feature vectors bit by bit: if more keyframes have a 1 than a 0 in a given bit position, that bit of the global feature vector is set to 1; otherwise it is set to 0. This yields the video's 64-dimensional global feature vector.
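The per-bit majority vote of step (5) can be sketched over the integer hashes produced in the previous step. The function name is hypothetical, and since the patent only says "more 1s than 0s", ties yield 0 here:

```python
def global_signature(frame_hashes, bits=64):
    """Per-bit majority vote across all keyframe hashes of one video:
    a bit of the global signature is 1 iff strictly more keyframes
    have a 1 than a 0 in that bit position."""
    sig = 0
    for b in range(bits):
        ones = sum((h >> b) & 1 for h in frame_hashes)
        if ones > len(frame_hashes) - ones:
            sig |= 1 << b
    return sig
```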
(6) Compare the global feature vector of the video to be checked with the global feature vector of each video in the database. If they do not match, the two videos are considered not to be near-duplicates, the database video is excluded, and the next video is compared; otherwise go to step (7).
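Step (6) does not spell out the matching criterion for global signatures; a natural reading is a Hamming-distance test on the 64-bit signatures, sketched below. The `max_dist` threshold is an assumed parameter, not a value from the patent:

```python
def hamming(a, b):
    """Number of differing bits between two integer signatures."""
    return bin(a ^ b).count("1")

def coarse_filter(query_sig, db_sigs, max_dist=16):
    """Return indices of database videos whose global signature lies
    within max_dist bits of the query; all others are excluded from
    the more expensive keyframe-level comparison."""
    return [i for i, s in enumerate(db_sigs)
            if hamming(query_sig, s) <= max_dist]
```

This coarse filter is what makes the scheme hierarchical: most database videos are rejected with one XOR and popcount each, and only the survivors reach the bipartite-graph stage.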
(7) Abstract all keyframes of the two videos being compared into nodes and add edges according to their feature vectors, thereby building a bipartite graph.
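Step (7) leaves the edge criterion open; a plausible construction, sketched here under that assumption, adds an edge whenever two keyframe hashes are within a bit-distance threshold:

```python
def build_bipartite_edges(hashes_a, hashes_b, max_dist=8):
    """Edges between keyframes of two videos: keyframe i of video A is
    connected to keyframe j of video B when their 64-bit hashes differ
    in at most max_dist bit positions."""
    edges = []
    for i, ha in enumerate(hashes_a):
        for j, hb in enumerate(hashes_b):
            if bin(ha ^ hb).count("1") <= max_dist:
                edges.append((i, j))
    return edges
```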
(8) Run the maximum-matching approximation algorithm on the bipartite graph built in step (7). If the result is greater than a given threshold Th (preset; its value scales with the number of keyframes, larger for more keyframes), the two videos are considered near-duplicates. If the result is less than Th/2, the two videos are considered different; otherwise go to step (9).
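The patent calls the step (8) algorithm "known" without naming it; one common choice is the greedy 1/2-approximation of maximum bipartite matching, sketched here on the edge list from step (7):

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation of maximum bipartite matching.

    edges: iterable of (u, v) pairs, u a keyframe index of one video
    and v of the other; each endpoint is matched at most once."""
    used_u, used_v = set(), set()
    matched = 0
    for u, v in edges:
        if u not in used_u and v not in used_v:
            used_u.add(u)
            used_v.add(v)
            matched += 1
    return matched
```

The greedy result is guaranteed to be at least half the true maximum matching, which is why a result below Th/2 is enough to declare the videos different, while the borderline cases fall through to the exact computation of step (9).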
(9) On the basis of the bipartite graph, add two new nodes, a source node (SOURCE) and a sink node (SINK). Connect every node of the video being compared (the nodes representing its keyframes) to the source node by an edge, and connect the remaining nodes to the sink node, thereby obtaining a general graph. Compute the maximum matching value with a graph-cut algorithm on this general graph and compare the result with the given threshold Th: if it is greater than Th, the videos are near-duplicates; otherwise they are considered different, and the next video is compared.
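The source/sink construction of step (9) is exactly a unit-capacity maximum-flow problem. The sketch below uses Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp); the particular graph-cut algorithm the patent intends may differ, but on this graph the maximum flow equals the maximum matching:

```python
from collections import defaultdict, deque

def max_flow_matching(edges):
    """Exact maximum bipartite matching via unit-capacity max flow,
    mirroring step (9): a SOURCE node feeds every left keyframe and
    every right keyframe feeds a SINK node."""
    S, T = "source", "sink"
    cap = defaultdict(lambda: defaultdict(int))
    for u, v in edges:
        cap[S][("L", u)] = 1          # source -> left keyframe
        cap[("R", v)][T] = 1          # right keyframe -> sink
        cap[("L", u)][("R", v)] = 1   # similarity edge from step (7)
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            x = queue.popleft()
            for y, c in cap[x].items():
                if c > 0 and y not in parent:
                    parent[y] = x
                    queue.append(y)
        if T not in parent:
            return flow               # no augmenting path remains
        # All residual capacities on the path are 1, so augment by 1.
        y = T
        while parent[y] is not None:
            x = parent[y]
            cap[x][y] -= 1
            cap[y][x] += 1
            y = x
        flow += 1
```

The second test case below is one the greedy approximation of step (8) can get wrong depending on edge order, which is the point of falling back to the exact computation here.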
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100771821A CN101464909B (en) | 2009-01-20 | 2009-01-20 | Fast robust approximately same video detection and exclusion method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100771821A CN101464909B (en) | 2009-01-20 | 2009-01-20 | Fast robust approximately same video detection and exclusion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101464909A true CN101464909A (en) | 2009-06-24 |
CN101464909B CN101464909B (en) | 2010-11-03 |
Family
ID=40805484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009100771821A Expired - Fee Related CN101464909B (en) | 2009-01-20 | 2009-01-20 | Fast robust approximately same video detection and exclusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101464909B (en) |
2009
- 2009-01-20: CN2009100771821A granted as CN101464909B (en); status: not active, Expired - Fee Related
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103092935A (en) * | 2013-01-08 | 2013-05-08 | 杭州电子科技大学 | Approximate copy image detection method based on scale invariant feature transform (SIFT) quantization |
CN105183726A (en) * | 2014-05-28 | 2015-12-23 | 腾讯科技(深圳)有限公司 | Method and system for determining user similarity |
CN106572394A (en) * | 2016-08-30 | 2017-04-19 | 上海二三四五网络科技有限公司 | Video data navigation method |
CN111510724A (en) * | 2019-01-31 | 2020-08-07 | 北京小犀智能科技中心(有限合伙) | Equivalent video compression storage method and system based on image feature extraction |
CN110163079A (en) * | 2019-03-25 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Video detecting method and device, computer-readable medium and electronic equipment |
CN110163079B (en) * | 2019-03-25 | 2024-08-02 | 腾讯科技(深圳)有限公司 | Video detection method and device, computer readable medium and electronic equipment |
CN110415339A (en) * | 2019-07-19 | 2019-11-05 | 清华大学 | Method and device for calculating matching relationship between input three-dimensional shapes |
CN110415339B (en) * | 2019-07-19 | 2021-07-13 | 清华大学 | Method and device for calculating matching relationship between input three-dimensional bodies |
CN111182364A (en) * | 2019-12-27 | 2020-05-19 | 杭州趣维科技有限公司 | Short video copyright detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN101464909B (en) | 2010-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103593464B (en) | Video fingerprint detecting and video sequence matching method and system based on visual features | |
CN101374234B (en) | A content-based video copy monitoring method and device | |
Liu et al. | Key frame extraction from MPEG video stream | |
CN101394522A (en) | Method and system for detecting video copy | |
CN104376105B (en) | The Fusion Features system and method for image low-level visual feature and text description information in a kind of Social Media | |
CN107844779A (en) | A kind of video key frame extracting method | |
CN101464909A (en) | Fast robust approximately same video detection and exclusion method | |
CN103065153A (en) | Video key frame extraction method based on color quantization and clusters | |
CN104166685A (en) | Video clip detecting method and device | |
CN112434553B (en) | Video identification method and system based on deep dictionary learning | |
CN102750339B (en) | Positioning method of repeated fragments based on video reconstruction | |
Li et al. | Multi-scale cascade network for salient object detection | |
CN104376003A (en) | Video retrieval method and device | |
CN104036287A (en) | Human movement significant trajectory-based video classification method | |
CN111506773A (en) | Video duplicate removal method based on unsupervised depth twin network | |
CN109614933B (en) | A Motion Segmentation Method Based on Deterministic Fitting | |
CN107153670A (en) | The video retrieval method and system merged based on multiple image | |
CN106844785A (en) | Saliency segmentation-based content-based image retrieval method | |
Lin et al. | Robust fisher codes for large scale image retrieval | |
CN105760875B (en) | The similar implementation method of differentiation binary picture feature based on random forests algorithm | |
Mou et al. | Content-based copy detection through multimodal feature representation and temporal pyramid matching | |
CN101635851B (en) | Video Fingerprint Extraction Method | |
CN110188625B (en) | A Video Refinement Structure Method Based on Multi-feature Fusion | |
CN104837028A (en) | Video same-bit-rate dual-compression detection method | |
CN104463864B (en) | Multistage parallel key frame cloud extracting method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20101103 |