CN115695909A - Video playing method, electronic equipment and storage medium - Google Patents
- Publication number
- CN115695909A (Application CN202211180825A / CN202211180825.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- signature
- digest
- abstract
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention relates to the technical field of video processing, and in particular to a video playing method, an electronic device and a storage medium. The method comprises: acquiring a target video, wherein the target video comprises a signed reference digest and a compressed encoded video stream; separating the signed reference digest and the compressed encoded video stream from the target video based on an identifier of the signed reference digest; decoding the compressed encoded video stream to determine the decoded video it contains; performing digest calculation on the content of the decoded video to determine a decoded digest index, wherein the digest calculation is performed in the same manner as the calculation that produced the reference digest; verifying the target video based on the decoded digest index and the signed reference digest; and playing the decoded video when the verification passes. Because the signed reference digest is computed from image invariants, its reliability is high, and verifying the target video against it before playback guarantees the reliability of what is played.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video playing method, an electronic device, and a storage medium.
Background
As intelligent technology becomes widely used in the video field, current intelligent video editing technology can manipulate videos so that the authenticity of their content is difficult to distinguish with the naked eye. For example, given several photos and a reference video, the people in the photos can be made to reenact the performance of the given roles in the reference video, generating unreliable video content about those people and producing a published video from it. Such a published video is then made into distribution videos of different service qualities and delivered to the corresponding terminals, so the reliability of the video played at a terminal is low.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video playing method, an electronic device, and a storage medium, so as to solve the problem that the reliability of a video played by a distributed terminal is low.
According to a first aspect, an embodiment of the present invention provides a video playing method, including:
acquiring a target video, wherein the target video comprises a signed reference digest and a compressed encoded video stream, the signed reference digest comprises a reference digest and a signature of the reference digest, and the reference digest is obtained by splicing the digest calculation result of the content of the original video of the compressed encoded video stream with the publisher identity information of the target video;
separating the signed reference digest and the compressed encoded video stream from the target video based on the identifier of the signed reference digest;
decoding the compressed encoded video stream to determine the decoded video it contains;
performing digest calculation on the content of the decoded video to determine a decoded digest index, wherein the digest calculation is performed in the same manner as the calculation that produced the digest calculation result;
verifying the target video based on the decoding summary index and the signature reference summary;
and when the verification is passed, playing the decoded video.
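The steps above can be sketched end to end. The patent does not fix a concrete container layout for embedding the signed reference digest, so this sketch assumes a hypothetical `[marker][length][payload]` framing; the marker `SRD0` and the framing are illustrative only.

```python
import hashlib

# Hypothetical 4-byte marker; the patent specifies an "identifier" for the
# signed reference digest but not a byte layout, so this framing is assumed.
SIG_DIGEST_MARKER = b"SRD0"

def split_target_video(data: bytes):
    """Separate the signed reference digest from the compressed video stream."""
    idx = data.find(SIG_DIGEST_MARKER)
    if idx < 0:
        raise ValueError("no signed reference digest found")
    length = int.from_bytes(data[idx + 4: idx + 8], "big")
    digest = data[idx + 8: idx + 8 + length]
    stream = data[:idx] + data[idx + 8 + length:]
    return digest, stream

# Build a toy target video: stream bytes with an embedded digest block.
stream = b"\x00\x01video-bytes\x02"
digest = hashlib.sha256(b"original video content").digest()
target = (stream[:5] + SIG_DIGEST_MARKER
          + len(digest).to_bytes(4, "big") + digest + stream[5:])

out_digest, out_stream = split_target_video(target)
assert out_digest == digest and out_stream == stream
```

After this separation, the stream is decoded and the digest recomputed on the decoded frames, as in the remaining claim steps.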
In the video playing method provided by this embodiment of the invention, the signed reference digest carried in the target video is obtained by digest calculation over the content of the original video of the compressed encoded video stream. Because an image has unique content — that is, invariants intrinsic to the image — a digest computed from those invariants is reliable. Moreover, the signed reference digest also contains the publisher identity information, so the resulting target video bears the publisher's identity mark and the signed reference digest has good anti-counterfeiting capability.
In some embodiments, the signed reference digest includes a first signed reference digest and a second signed reference digest, the first signed reference digest including the publisher identity information and a description of the digest calculation, and the second signed reference digest including the digest calculation result of the content of the original video of the compressed encoded video stream; the separating the signed reference digest and the compressed encoded video stream from the target video based on an identifier of the signed reference digest includes:
separating the first signed reference digest from the target video using the identifier of the first signed reference digest;
and separating the second signed reference digest and its corresponding compressed encoded video stream from the target video using the identifier of the second signed reference digest.
In the video playing method provided by this embodiment of the invention, the target video contains these two types of signed reference digests, and each can be accurately separated from the target video using its corresponding identifier.
In some embodiments, the verifying the target video based on the decoding digest index and the signed reference digest includes:
extracting the reference digest in the second signed reference digest to obtain a video digest index, wherein the video digest index is a digest calculation result of content of an original video in the compressed coded video stream;
calculating the similarity of the decoding summary index and the video summary index;
and determining a verification result of the target video based on the similarity.
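The similarity-based check above can be sketched as follows. The patent does not name a similarity measure or threshold, so cosine similarity over digest-index vectors and the 0.95 threshold are assumptions for illustration.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def verify(decoded_index, reference_index, threshold=0.95):
    # Lossy re-encoding perturbs the invariants slightly, so the check
    # is a similarity threshold rather than exact equality.
    return cosine_similarity(decoded_index, reference_index) >= threshold

ref = [0.52, 0.31, 0.77, 0.12]       # digest index of the original video
dec = [0.50, 0.33, 0.75, 0.13]       # digest index recomputed after decoding
tampered = [0.10, 0.90, 0.05, 0.88]  # digest index of substituted content
assert verify(dec, ref)
assert not verify(tampered, ref)
```

Because good invariants change sharply only when content changes, a high threshold tolerates compression loss while still rejecting substituted content.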
In the video playing method provided by this embodiment of the invention, the second signed reference digests correspond one-to-one with the compressed encoded video streams; therefore, each second signed reference digest can indicate the condition of its corresponding stream, from which the verification result of the target video is determined.
In some embodiments, the first signed reference digest comprises a first reference digest and a first signature of the first reference digest, the first reference digest comprising the publisher identity information and a computation description for the digest computation, the verifying the target video based on the decoding digest index and the signed reference digest further comprising:
extracting the publisher identity information and/or the first signature in the first signature reference digest;
and verifying the identity information of the publisher and/or the first signature, and determining the verification result of the target video.
In the video playing method provided by this embodiment of the invention, the first signed reference digest contains description information that does not change as the images in the video change, so it can represent the overall condition of the target video.
In some embodiments, when the target video is a target distribution video, the method for generating the target distribution video includes:
acquiring a target published video, wherein the target published video comprises a signed reference digest and a compressed encoded video stream;
separating the signed reference digest and the compressed encoded video stream from the target published video based on the identifier of the signed reference digest;
processing the compressed encoded video stream to a preset quality of service to obtain a video to be distributed with the preset quality of service;
and encoding the signed reference digest into the video to be distributed to determine the target distribution video with the preset quality of service.
In the video playing method provided by this embodiment of the invention, the signed reference digest carried in the target published video is obtained by digest calculation over the content of the original video of the compressed encoded video stream. Because a digest computed from image invariants is reliable, and because the digest also contains the publisher identity information, the resulting target distribution video bears the publisher's identity mark and has good anti-counterfeiting capability, ensuring that the target distribution video obtained at the preset quality of service is highly reliable.
In some embodiments, the video to be distributed includes sub-videos to be distributed that correspond one-to-one with the compressed encoded video streams, and the encoding the signed reference digest into the video to be distributed and determining and distributing the target distribution video with the preset quality of service includes:
encoding each second signed reference digest into its corresponding sub-video to be distributed to obtain target distribution sub-videos with the preset quality of service;
and splicing the target distribution sub-videos with the preset quality of service, encoding the first signed reference digest into the splicing result, and determining and distributing the target distribution video with the preset quality of service.
In the video playing method provided by this embodiment of the invention, the first signed reference digest carries description information and does not change over time, while the second signed reference digest is closely tied to the video content. Adopting different encoding schemes for the two kinds of digests reduces the data volume added to the target distribution video while still guaranteeing its reliability.
In some embodiments, the method for generating the target published video includes:
acquiring the original video of the compressed encoded video stream and the publisher identity information;
performing digest calculation on the content of the original video to determine a video digest index;
determining a reference digest based on the publisher identity information and the video digest index, and signing the reference digest to obtain a digital signature;
splicing the reference digest and the digital signature to determine a signed reference digest;
and encoding the signed reference digest into the compressed encoded video stream to determine the target published video.
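The publishing-side steps above can be sketched as follows. The patent envisions signing with a digital certificate and private key; to keep this sketch self-contained it substitutes an HMAC-SHA256 as a stand-in signature, and the JSON encoding and `|SIG|` separator are assumptions, not the patent's format.

```python
import hashlib
import hmac
import json

def make_signed_reference_digest(video_index, publisher_info, signing_key: bytes):
    # Reference digest = publisher identity spliced with the video digest index.
    reference = json.dumps({"publisher": publisher_info, "index": video_index},
                           sort_keys=True).encode()
    # Stand-in signature: HMAC-SHA256. A real deployment would use a
    # certificate-backed private key as the patent describes.
    signature = hmac.new(signing_key, reference, hashlib.sha256).digest()
    # Splice reference digest and signature into the signed reference digest.
    return reference + b"|SIG|" + signature

srd = make_signed_reference_digest(
    video_index=[0.52, 0.31, 0.77],
    publisher_info={"name": "Example Studio", "id": "pub-001"},
    signing_key=b"demo-key")

reference, signature = srd.split(b"|SIG|", 1)
assert hmac.new(b"demo-key", reference, hashlib.sha256).digest() == signature
```

The resulting blob is what gets encoded into the compressed encoded video stream to form the target published video.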
In the video playing method provided by this embodiment of the invention, because an image has unique content — invariants intrinsic to the image — performing the digest calculation on those invariants ensures that the video digest index is not affected by external factors, guaranteeing its reliability. Combining the publisher identity information on this basis means the published video carries the publisher's identity mark and has good anti-counterfeiting capability. Meanwhile, since the video content of the published video remains publicly accessible, the unencrypted nature of the public domain is preserved, and the reliability of the target published video is ensured without affecting public access to it.
In some embodiments, the encoding the signed reference digest into the compressed encoded video stream to determine the target published video comprises:
encoding a second signed reference digest into the target position of each compressed encoded video stream to obtain a video code stream, wherein the second signed reference digest is obtained by splicing the second reference digest with the second signature;
and encoding the first signed reference digest into a target position of the video code stream to determine the target published video, wherein the first signed reference digest is obtained by splicing the first reference digest with the first signature.
In the video playing method provided by this embodiment of the invention, the first reference digest remains valid throughout the publishing process, so it is encoded once into the target position of the video code stream rather than into every compressed encoded video stream, which reduces the data volume of the published video — when all necessary information is carried, the smaller the data volume the better. The second reference digest changes over time, so the corresponding second signed reference digest is encoded into each compressed encoded video stream, improving the reliability of the published video while keeping its data volume small.
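The two-tier layout — the static first digest once at the head, a content-dependent second digest per segment — can be sketched as follows. The `FSRD`/`SSRD` markers and length prefixes are hypothetical; the patent fixes only the structure, not the byte layout.

```python
def assemble_published_stream(first_srd: bytes, segments):
    """Build a published code stream: one first signed reference digest at the
    head, then one second signed reference digest before each compressed
    encoded segment."""
    out = b"FSRD" + len(first_srd).to_bytes(2, "big") + first_srd
    for second_srd, stream in segments:
        out += b"SSRD" + len(second_srd).to_bytes(2, "big") + second_srd + stream
    return out

blob = assemble_published_stream(
    b"publisher+desc",
    [(b"d1", b"seg-one"), (b"d2", b"seg-two")])

# The first digest appears exactly once; the second digest once per segment.
assert blob.count(b"FSRD") == 1 and blob.count(b"SSRD") == 2
```

Encoding the static description once, rather than per segment, is what keeps the added data volume small.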
According to a second aspect, an embodiment of the present invention further provides a video playing apparatus, including:
an acquisition module, configured to acquire a target video, wherein the target video comprises a signed reference digest and a compressed encoded video stream, the signed reference digest comprises a reference digest and a signature of the reference digest, and the reference digest is obtained by splicing the digest calculation result of the content of the original video of the compressed encoded video stream with the publisher identity information of the target video;
a separation module, configured to separate the signature reference digest and the compressed encoded video stream from the target video based on the identifier of the signature reference digest;
the decoding module is used for decoding the compressed and coded video stream and determining a decoded video in the compressed and coded video stream;
the calculation module is used for performing summary calculation on the content of the decoded video and determining a decoding summary index, wherein the summary calculation mode is consistent with the calculation mode for obtaining the summary calculation result;
a verification module, configured to verify the target video based on the decoding summary indicator and the signature reference summary;
and the playing module is used for playing the decoded video when the verification is passed.
According to a third aspect, embodiments of the present invention provide an electronic device, comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing therein computer instructions, and the processor executing the computer instructions to perform the video playing method according to the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the video playing method described in the first aspect or any one implementation manner of the first aspect.
It should be noted that, for the corresponding beneficial effects of the video playing apparatus, the electronic device and the computer-readable storage medium provided in the embodiments of the present invention, reference is made to the description of the corresponding beneficial effects of the video playing method above, which are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a method for generating a target published video according to an embodiment of the invention;
FIG. 2 is a flowchart of a method for generating a target published video according to an embodiment of the invention;
FIG. 3 is a flowchart of a method for generating a target published video according to an embodiment of the invention;
fig. 4 is a flowchart of a video distribution method according to an embodiment of the present invention;
fig. 5 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 6 is a block diagram of a video playback apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The whole process of a video, from generation to playing, can be divided into three main stages: video publishing, video distribution and video playing. In video publishing, a video publishing server or the like performs publishing processing on the acquired video to be published to obtain a target published video, and sends it to a video distribution server or the like. Taking a video distribution server as an example, the target published video is made into target distribution videos with different preset qualities of service, which are sent to the corresponding terminals for playing. After receiving the target distribution video with the corresponding quality of service, a terminal decodes it and performs related operations, so that the target video with the corresponding quality of service can be played.
It should be noted that video publishing, video distribution and video playing are not strictly processed in the above order. For example, the video obtained by a video distribution server may come from a video publishing server or from a video distribution server at a previous stage; likewise, the video received by a terminal may be delivered by a video publishing server or by a video distribution server. A playing terminal therefore does not know whether the received target video comes from video publishing or from video distribution.
In video publishing, digest calculation is performed on the original video of the video to be published to determine a video digest index, a signed reference digest is formed on this basis, and the signed reference digest is encoded into the code stream obtained by compression-encoding the video to be published, yielding the target published video. The original video consists of original video images, where an original video image is the large block of data formed by the matrix of pixel values; the video code stream is the compression-encoded media data stream, including audio and video media streams, usually obtained by heavily compressing the original media in a lossy manner. The code stream has a small data volume and is suitable for transmission, but not for analysis and processing of the media content. Therefore, in the embodiments of the invention, the reference digest is computed over the original video, not over the compression-encoded media code stream.
The original video can be regarded as a sequence of images at successive points in time, and each image can be mathematically transformed to obtain invariants. An invariant is a data quantity that is insensitive to changes of the video image within a certain range of deformations and adjustments. A good invariant does not change significantly as long as the content of the image is not substantially changed, even when the quality of the image changes; but whenever the content of the image changes significantly, the invariant should change sensitively with it. The invariants of an image thus reflect what the image depicts. Splitting a video into images, computing their invariants, and arranging and aggregating them in time order yields invariants of the video content, reflecting the content over a period of time. The processing object for generating the video digest index is therefore the content of the original video.
For example, the average brightness of all pixels of an image can be used as an invariant: as long as the brightness of the image is not adjusted, the average brightness does not change significantly whether the image is enlarged or reduced. As another example, the position of the center of gravity of an image is an invariant: it stays essentially unchanged whether the image is enlarged, reduced, brightened or dimmed. The N-th order central moments of the image form geometric invariants similar to the center of gravity. The eigenvalues/singular values of the normalized numerical matrix of the image are also invariants: after many kinds of processing of the image, their principal part does not change significantly. The spectrum of the image, with features extracted at its high- and low-frequency positions, can likewise serve as an invariant. Finally, a set of representative key points selected from the image, together with appearance-similarity feature descriptors of their neighborhoods, can be arranged into an invariant.
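Two of the invariants mentioned above — average brightness and the brightness-weighted center of gravity — can be sketched on a toy grayscale image using plain Python lists. The normalization and the 2x upscale are illustrative choices; the small centroid drift under upscaling comes from the discrete grid normalization.

```python
def mean_brightness(img):
    """Average of all pixel values — unchanged by resizing when content is."""
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def centroid(img):
    """Brightness-weighted center of gravity, normalized to [0, 1]."""
    h, w = len(img), len(img[0])
    total = sum(sum(row) for row in img)
    cy = sum(y * v for y, row in enumerate(img) for v in row) / (total * (h - 1))
    cx = sum(x * v for row in img for x, v in enumerate(row)) / (total * (w - 1))
    return cx, cy

def upscale2x(img):
    """Nearest-neighbour 2x enlargement: content unchanged, data quadrupled."""
    return [[v for v in row for _ in (0, 1)] for row in img for _ in (0, 1)]

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
big = upscale2x(img)

assert mean_brightness(img) == mean_brightness(big)   # brightness invariant
cx1, cy1 = centroid(img)
cx2, cy2 = centroid(big)
assert abs(cx1 - cx2) < 0.05 and abs(cy1 - cy2) < 0.05  # centroid nearly invariant
```

Both quantities survive the resize even though every byte of the image data changed, which is exactly the property the digest calculation relies on.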
Therefore, digest calculation based on the content of the original video can be regarded as digest calculation over invariants. The target published video is obtained by encoding the signed reference digest into the code stream of the compression-encoded video to be published, so the target published video carries the publisher identity information and has good anti-counterfeiting capability.
In accordance with an embodiment of the present invention, there is provided a video playback method embodiment, it is noted that the steps illustrated in the flow chart of the accompanying figures may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flow chart, in some cases, the steps illustrated or described may be performed in an order different than that presented herein.
As described above, the video playing method provided by the embodiments of the present invention operates on the target distribution video and the target published video. To better describe the video playing method, the method for generating the target published video and the method for generating the target distribution video are described in detail first. The target published video is produced by an electronic device such as a video publishing server and sent to the video distribution server, so the target published video is processed by the video publishing server or a similar device. For example, as shown in fig. 1, the method for generating the target published video includes:
s11, obtaining the original video of the compressed and coded video stream and the identity information of the publisher.
The compressed encoded video stream is obtained by compression-encoding the video to be published acquired by the video publishing server; for example, after the video publishing server acquires the video to be published, it compression-encodes it to obtain the compressed encoded video stream. On this basis, the original video of the compressed encoded video stream is the original video of the video to be published.
The original video of the video to be published may be a clip of a target duration taken from the video to be published; for example, a clip of the target duration is extracted from the video to be published every preset interval and used as the original video for the subsequent video digest index calculation.
The target duration may be as small as 0 seconds, in which case the clip contains no original video image and the corresponding video digest index is empty, represented by a data length of 0. A length of 0 indicates that the content is empty; as the description of the subsequent steps shows, the reference digest contains more than just the video digest index, so a meaningful reference digest for functional or identification use can still be formed even when the video digest index is empty.
Alternatively, the target duration may be 1 to 10 seconds or the like. The specific length of the target duration is not limited here and is set according to actual requirements.
For example, suppose the video to be published is 30 minutes long, the preset interval is 5 minutes, and the target duration is 1 second. A 1 s original video clip is extracted from minutes 0-5 of the video to be published for calculating the video digest index, and minutes 0-5 are compression-encoded to obtain a compressed encoded video stream; a 1 s original video clip is extracted from minutes 5-10 for calculating the video digest index, and minutes 5-10 are compression-encoded to obtain a compressed encoded video stream; and so on.
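The sampling schedule in this example reduces to simple interval arithmetic; the function name and return shape below are illustrative, not from the patent.

```python
def sample_schedule(total_s, period_s, target_s):
    """Return (start, end) offsets in seconds of the sampled clips used for
    digest computation: one target_s-second clip per period_s-second segment."""
    return [(start, min(start + target_s, total_s))
            for start in range(0, total_s, period_s)]

# 30-minute video, one 1-second sample per 5-minute segment -> 6 samples.
clips = sample_schedule(total_s=30 * 60, period_s=5 * 60, target_s=1)
assert len(clips) == 6
assert clips[0] == (0, 1) and clips[1] == (300, 301)
```

Each clip feeds the digest calculation for its segment, while the full segment is compression-encoded separately.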
The publisher identity information of the video to be published may include: the publisher's common name, the publisher's organization name, the publisher's address, the author's name, the author's contact information, an access descriptor for the publisher's digital certificate, an access descriptor for the publisher's identity public key, a publisher identity description number, and extension data carrying business-related general descriptions added for special requirements, where the access descriptors are submitted to a specific business system to obtain the digital certificate or the identity public key. The publisher identity information is set according to actual needs and is not limited here.
And S12, performing summary calculation on the content of the original video and determining a video summary index.
As described above, the digest calculation over the content of the original video constructs an invariant of the video. A video is a sequence of images arranged in time, and an invariant is a data description that summarizes the image and video content. The same video content, viewed from multiple perspectives, yields the same data description, i.e., an invariant. The advantage of using invariants is that the calculation of the video summary index depends not on the raw data of the video but on its content.
When performing the digest calculation, the brightness or the picture content of the original video images in the original video may be analyzed, or a numerical calculation may be performed on the original video images, and so on. Optionally, the video summary index includes at least one video summary sub-index, which includes, but is not limited to, luminance, chrominance, and the like.
And S13, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
The reference digest includes the publisher identity information and the video summary index, or further includes other information on that basis, for example video description information. After the reference digest is determined, it is signed to obtain a digital signature.
The manner of signing includes, but is not limited to, digital certificates and private keys, quantum entanglement techniques, or other techniques. Taking signing the reference digest with a digital certificate and a private key as an example, the digital certificate may be issued by a widely recognized authority or certificate center, by a certificate center accepted within a limited scope, may be a self-signed certificate that is not widely accepted, or may simply be an asymmetric public key that is not widely recognized. Of course, the form and source of the digital certificate chosen by the publisher has a corresponding indirect effect on the confidence level of the video stream the publisher releases.
And S14, splicing the reference digest and the digital signature to determine a signature reference digest.
The signature reference digest is obtained by splicing the reference digest and the corresponding digital signature, where the digital signature may be spliced before or after the reference digest.
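Steps S13-S14 can be sketched minimally as follows. This is an assumption-laden illustration: the field layout is hypothetical, and a SHA-256 hash stands in for a real digital signature, which would be produced with the publisher's private key (e.g., RSA or ECDSA under X.509).

```python
# Sketch of S13-S14: splice identity + index into a reference digest, "sign"
# it, then splice digest and signature into a signature reference digest.
import hashlib

def make_signature_reference_digest(identity: bytes, index: bytes) -> bytes:
    reference_digest = identity + index  # splice identity info + summary index
    # placeholder: a real implementation signs with a private key, not a hash
    signature = hashlib.sha256(reference_digest).digest()
    # the digital signature may be spliced before or after the reference digest
    return reference_digest + signature

blob = make_signature_reference_digest(b"publisher", b"index")
```

The resulting blob is what later gets encoded into the compressed encoded video stream in S15.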
And S15, coding the signature reference abstract into the compressed and coded video stream to determine a target release video.
The signature reference digest may be encoded before or after the compressed encoded video stream, among other positions; the specific encoding position is not limited. After the signature reference digest is encoded into the compressed encoded video stream, the target release video is obtained.
Since the reference digest and the digital signature contained in the signature reference digest are related to the original video, they vary with the video content. Therefore, when the signature reference digest is encoded, the position within the video to be published of the original video that generated the video summary index can be taken into account.
In the method for generating the target release video provided by this embodiment, the original video can be regarded as a sequence of images at consecutive video points. Each image has its own unique content, and a certain invariant can be obtained from that content through mathematical computation; that is, each image has an invariant associated with it. Splitting the video into images yields the invariants of the images, and performing digest calculation on these invariants ensures that the video summary index is not affected by external factors, guaranteeing its reliability. At the same time, combining the publisher identity information on this basis means that the published video carries the publisher's identity mark and has good anti-counterfeiting capability; and because the video content remains publicly accessible, the non-encrypted nature of the public domain is maintained, so the reliability of the target release video is ensured without affecting public access to it.
In some embodiments, fig. 2 shows a flow chart of another alternative embodiment of a method for generating a target publication video, which, as shown in fig. 2, comprises the following steps:
s21, obtaining the original video of the compressed and coded video stream and the identity information of the publisher.
Please refer to S11 in fig. 1 for details, which are not described herein again.
And S22, performing summary calculation on the content of the original video to determine a video summary index.
Specifically, the above S22 includes:
s221, performing feature processing on the original video image in the original video and determining a feature processing result.
S222, determining the characteristic processing result as a video abstract index.
In the digest calculation, the original video images in the original video are processed. The original video includes at least one original video image; if it includes at least two, feature processing may be performed on each image separately and the feature processing results of the images fused to obtain the feature processing result of the original video. Fusion methods include, but are not limited to, averaging, weighted summation, and the like. Alternatively, if the original video includes at least two original video images, only one of them may be extracted as the original video image for feature processing.
In some embodiments, the S221 includes: and reducing the size of the original video image to a preset size to obtain a thumbnail of the original video so as to determine a feature processing result.
For example, a first video image in the original video is determined as an original video image for feature processing, and the size of the original video image is reduced to a preset size, for example, to a 64 × 32 thumbnail, thereby obtaining a feature processing result.
Alternatively, a preset number of original video images are extracted from the original video, each is reduced to the preset size, the reduced images are superimposed pixel-by-pixel and averaged to obtain an average thumbnail, and the average thumbnail is determined as the feature processing result. For example, three original video images are taken from near the 1/3, 2/3, and 3/3 time points of the original video, a thumbnail is made from each, and the thumbnails are superimposed pixel-by-pixel and averaged to produce the average thumbnail, which determines the feature processing result.
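The pixel-wise averaging step can be sketched as follows, on toy grayscale "thumbnails" represented as nested lists; a real implementation would first rescale each frame to the preset size (e.g., 64 × 32), which is omitted here.

```python
# Sketch of the average-thumbnail computation: accumulate equally sized
# thumbnails pixel-by-pixel and divide by their count.

def average_thumbnail(thumbs):
    """Pixel-wise average of equally sized grayscale thumbnails."""
    n = len(thumbs)
    h, w = len(thumbs[0]), len(thumbs[0][0])
    return [[sum(t[y][x] for t in thumbs) // n for x in range(w)]
            for y in range(h)]

# three 2x2 "thumbnails" standing in for frames near the 1/3, 2/3, 3/3 points
t = average_thumbnail([[[30, 60], [90, 120]],
                       [[60, 90], [120, 150]],
                       [[90, 120], [150, 180]]])
```

The averaged result is then usable as the feature processing result described above.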
In some embodiments, the S221 includes: and analyzing the color characteristics and/or the light and shade characteristics of the original video image to determine the characteristic processing result.
The color feature and/or the shading feature may be obtained based on a partial region in the original video image, may be obtained based on an entire region in the original video image, and the like.
For example, each original video image is divided evenly into 16 columns and 9 rows of blocks, and the average brightness of the pixels in roughly the central fifth of each block represents that block, so that each original video image can be represented by a 16 × 9 brightness vector. The brightness vectors of all original video images are then superimposed and averaged to obtain a single 16 × 9 brightness vector, which determines the feature processing result.
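A minimal sketch of the per-image block-brightness feature follows. The choice of a small central window as "roughly the central fifth" is an assumption for illustration, as are the function and parameter names.

```python
# Sketch: divide a luma image into cols x rows equal blocks and represent
# each block by the mean luma of a small central window of the block.

def block_luma_vector(img, cols, rows, frac=5):
    h, w = len(img), len(img[0])
    bh, bw = h // rows, w // cols
    vec = []
    for r in range(rows):
        for c in range(cols):
            # central window, about 1/frac of the block in each dimension
            cy, cx = r * bh + bh // 2, c * bw + bw // 2
            ry, rx = max(1, bh // frac), max(1, bw // frac)
            pix = [img[y][x]
                   for y in range(cy - ry // 2, cy - ry // 2 + ry)
                   for x in range(cx - rx // 2, cx - rx // 2 + rx)]
            vec.append(sum(pix) // len(pix))
    return vec

# a 4x4 toy image split into a 2x2 grid -> a 4-element brightness vector
img = [[10, 10, 50, 50],
       [10, 10, 50, 50],
       [90, 90, 130, 130],
       [90, 90, 130, 130]]
v = block_luma_vector(img, cols=2, rows=2)
```

With cols=16 and rows=9 this yields the 16 × 9 brightness vector described above.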
In some embodiments, the S22 includes:
(1) And acquiring data with a preset length.
(2) And carrying out numerical calculation on the original video image in the original video and the data with the preset length to determine the video abstract index.
The preset-length data is auxiliary configuration data; different digest index calculation programs require configuration data of corresponding lengths. The configuration data is arranged into a finite-length data sequence, i.e., a vector. The preset-length data may be generated from pre-configured data or by a user's selection, and differs according to the mode, program, and so on chosen by the user.
For example, all original video images in the original video are used as input factors, and the preset-length data is used as an additional input factor. Numerical calculation over the original video images and the preset-length data yields a finite vector of finite dimension, i.e., a group of numbers.
It should be noted that the specific manner of calculating the numerical value is not limited to the above, and other manners may be adopted, and the specific manner of calculating the data is not limited at all.
In the calculation process, data with a preset length is combined, and the data with the preset length can be different along with different use scenes, so that the calculation of the video abstract index can be suitable for different use scenes.
It should be noted that the video summary index in S22 may be determined from one or more feature processing results, from numerical calculation, or from a combination of the two, and so on. This is not limited here and is set according to actual requirements.
And S23, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
Please refer to S13 in fig. 1, which is not repeated herein.
And S24, splicing the reference digest and the digital signature to determine a signature reference digest.
Please refer to S14 in fig. 1 for details, which are not described herein again.
And S25, coding the signature reference abstract into the compressed and coded video stream to determine a target release video.
Please refer to S15 in fig. 1 for details, which are not described herein again.
According to the method for generating the target release video provided by this embodiment, the feature processing result is determined by size reduction or by analysis of color or brightness features; the calculation is simple, fast, and effective, which improves the real-time performance of video publishing.
Fig. 3 is a flowchart showing another alternative embodiment of the target distribution video generation method, and as shown in fig. 3, the flowchart includes the following steps:
s31, obtaining the original video of the compressed and coded video stream and the publisher identity information.
Please refer to S11 in fig. 1 for details, which are not described herein again.
And S32, performing summary calculation on the content of the original video and determining a video summary index.
Please refer to S22 in fig. 2 for details, which are not described herein.
And S33, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
Specifically, the above S33 includes:
s331, obtain a first reference summary.
Wherein the first reference summary comprises publisher identity information and a calculation description for summary calculation.
The calculation description for the summary calculation includes, but is not limited to, the name of the calculation program used, the number of original video images for generating the first reference summary, and a list of configuration parameters to be used when performing the calculation using the specified calculation program, and the like.
In some embodiments, the S331 includes:
(1) And acquiring the video description of the video to be published.
(2) And splicing the publisher identity information, the video description and the calculation description to determine a first reference abstract.
When the first reference digest is generated, the video description of the video to be published is also combined. Video descriptions include, but are not limited to, program title, program duration, program content synopsis, program cover art, program category, program participants, program contributors, and the like. The publisher identity information, the video description, and the calculation description are spliced to obtain the first reference digest. The first reference digest therefore contains description information that does not change with the video content unless modified externally. It can thus be stored and used as a common backup; when it needs to be encoded into the video bitstream of the video to be published, it is signed to obtain the first signature reference digest.
The video description is used to represent some side information of the video, such as video usage scope, etc., to facilitate distribution of subsequent videos, etc.
S332, sign the first reference digest, and determine a first signature.
In this embodiment, a specific manner of signing is not limited, for example, a digital certificate and a private key are used for signing, and a first signature is obtained after signing the first reference digest.
And S333, determining the video abstract index as a second reference abstract, signing the second reference abstract, and determining a second signature.
The video summary index is obtained by performing summary calculation by using an original video image of an original video, and is changed along with the change of the original video image. Therefore, the summary calculation is required for each extracted original video image. And determining the video abstract index as a second reference abstract, and signing the second reference abstract by using a corresponding signature mode to determine a second signature.
And S34, splicing the reference digest and the digital signature to determine a signature reference digest.
Splicing the first reference abstract and the first signature to obtain a first signature reference abstract; and splicing the second reference digest and the second signature to obtain a second signature reference digest.
And S35, coding the signature reference abstract into a video code stream of the video to be published, and determining the target published video.
The video code stream is a code stream obtained after the video to be released is compressed and coded.
Specifically, the above S35 includes:
s351, compiling the second signature reference abstract into the target position of each compressed coding video stream to obtain the video code stream.
Since the second signature reference digest changes with the original video, i.e., over time, each compressed encoded video stream corresponds to the original video used to generate its video summary index. Therefore, when encoding the second signature reference digest, the position within the video to be published of the original video that generated it must be taken into account. Once the position is determined, the second signature reference digest may be encoded into the first or last frame of the compressed encoded video stream corresponding to that original video segment, thereby obtaining the video bitstream.
And S352, the first signature reference abstract is coded into the target position of the video code stream, and the target release video is determined.
As described above, the first reference digest is a common backup, and the first signature reference digest obtained from it may be encoded on or before the first frame of the video bitstream, and so on. For example, the first signature reference digest may be encoded at every odd-numbered segment, or once every N segments, of the video bitstream.
Since it is a common backup, it need not be encoded into every compressed encoded video stream, but once per several streams. The number of insertions is, however, not limited to one; it may be several. The advantage is that if the total content is long, finding the data does not require a long backtrack; a short backtrack suffices.
Because the publisher identity information and the calculation description used for digest calculation do not change with the video content, they can be shared throughout one video publishing session; storing them directly and retrieving them when needed avoids repeated calculation and improves publishing efficiency. The first reference digest is used only during publishing and is encoded at target positions of the video bitstream rather than into every compressed encoded video stream, which reduces the data volume of the published video.
It should be noted that no ordering is imposed on when the first and second signature reference digests are encoded; the encoding times may be set according to actual requirements.
As a specific application example of the embodiment of the present invention, the generated signature reference digests are classified into two types, one type is a first signature reference digest, hereinafter abbreviated as RA; the other is a second signature reference digest, hereinafter abbreviated RB. The first reference abstract forming the RA comprises publisher identity information, video description and calculation description used for abstract calculation; the second reference summary forming the RB contains only the video summary index.
For example, JSON is used in RA to organize and encode information into data blocks, or other forms, such as XML, or protobuf, or Box structures of ISOBMFF, may be used to organize and encode information into data blocks.
Taking JSON as an example, RA has the following structure:
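The structural code itself does not survive in this text; the following is a hedged reconstruction from the field descriptions that follow. All values are placeholders, and real RA data blocks may carry further fields.

```python
# Hypothetical RA structure, reconstructed from the field descriptions
# (author/video/feature/sign); values are illustrative only.
import json

ra = {
    "author": {                        # publisher identity information
        "sn": "Zhang San",             # name (required)
        "cn": "Example Studio",        # common name (required)
        "cert": "MIIB...base64...",    # digital certificate, Base64 (required)
    },
    "video": {                         # video description, all items optional
        "title": "Sample Program",
        "duration": 1800,
    },
    "feature": {                       # calculation description of the index
        "id": 1,                       # which summary sub-index is described
        "program": "blocks",           # name of the calculation program used
        "length": 30,                  # image frames per short video segment
        "init": [16, 9, 6],            # parameters for the chosen program
    },
    "sign": {                          # digital signature of RA
        "digest": "SHA-256",           # hash algorithm used
        "signature": "c2ln...base64",  # Base64-encoded signature result
    },
}
encoded = json.dumps(ra)
```

The same information could equally be organized in XML, protobuf, or ISOBMFF Box form, as noted above.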
As shown in the structural code above, author represents the publisher identity information, where sn, cn, and cert respectively represent the name, common name, and digital certificate for display. These must be provided; anonymous provision of content is not allowed. The other data items in author are optional. Other data items that do not conflict with the definitions above may also be added.
video represents the video description, and each of its data items is optional. The video description is allowed to be empty. Other data items that do not conflict with the definitions above may be added.
feature represents the calculation description of the video summary index, where program is the name of the calculation program used, length is the number of image frames in a short video segment, and init is the list of configuration parameters required when calculating with the specified program. program must be provided; length and init may be omitted, and other data items may be added depending on the chosen calculation program.
When a plurality of video summary sub-indexes are included in the video summary index, each feature object needs to use a unique id to indicate which video summary sub-index is currently described. When a plurality of feature objects appear in array form, id is a data item that cannot be omitted.
sign denotes the digital signature of RA, where digest denotes the hash algorithm used when calculating the digital signature, and signature represents the calculated digital signature result, expressed in Base64. Both digest and signature must be supplied and cannot be missing, and sign itself must be supplied. Usable hash algorithms include SHA-256, SM3, etc.
Since JSON-form data is not convenient for carrying binary data directly, wherever binary data values are involved, the binary data is encoded using Base64 or any other suitable encoding.
In the signature calculation, a hash code is computed, using the hash algorithm named by digest, over the RA-encoded data block before the sign object/data block is added, or after the sign object/data block is removed from the RA-encoded data block. The hash code is then signed with the private key associated with the digital certificate cert in the publisher identity description, and encoded with the Base64 algorithm to obtain the signature value.
The private key associated with cert is private information and the key material by which the publisher proves its true identity; the publisher must keep it safe. Under the X.509 public-key cryptosystem, precisely because the publisher owns this private key that no one else owns, it is believed that no one other than the publisher can forge a digital signature of equivalent cryptographic effect within any practically meaningful time.
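The signing procedure above can be sketched as follows. This is a minimal illustration: the private-key step is replaced by a caller-supplied placeholder, since real signing (e.g., RSA or ECDSA against the cert) requires a cryptographic library and the publisher's key material.

```python
# Sketch: hash the RA data block without its sign object, "sign" the hash,
# Base64-encode the result, and attach it as the sign object.
import base64
import hashlib
import json

def sign_ra(ra: dict, sign_with) -> dict:
    unsigned = {k: v for k, v in ra.items() if k != "sign"}   # drop sign object
    data = json.dumps(unsigned, sort_keys=True).encode()
    digest = hashlib.sha256(data).digest()                    # hash per "digest"
    signature = base64.b64encode(sign_with(digest)).decode()  # Base64 signature
    out = dict(unsigned)
    out["sign"] = {"digest": "SHA-256", "signature": signature}
    return out

# placeholder "private key" operation: identity, standing in for real signing
signed = sign_ra({"author": {"cn": "demo"}}, sign_with=lambda h: h)
```

Verification would re-hash the RA without sign and check the signature against the public key in cert.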
In RB, JSON, XML, protobuf, box structure of ISOBMFF, or the like is used to organize and encode information into data blocks. Taking JSON as an example, the specific structure of RB is as follows:
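The RB structure is likewise not reproduced in this text; the following is a hedged sketch based on the description that follows: RB carries only a feature object (now including the index result) and a sign object computed the same way as in RA. All values are placeholders.

```python
# Hypothetical RB structure; the "result" item carries the video summary
# index (Base64-encoded when binary), per the result data item described
# for the thumbnail and blocks programs.
rb = {
    "feature": {
        "id": 1,                       # matches the feature id in RA
        "program": "blocks",           # may be omitted and filled in from RA
        "length": 30,                  # may be omitted and filled in from RA
        "init": [16, 9, 6],            # may be omitted and filled in from RA
        "result": "bHVtYQ==",          # video summary index value
    },
    "sign": {
        "digest": "SHA-256",
        "signature": "c2ln...base64",
    },
}
```

Omitting program, length, and init keeps periodic RBs small, at the cost of needing the preceding RA to interpret them.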
In the RB, only the feature object and the sign object need to be included. The sign object is computed in the same way as in RA, using the private key associated with the cert described in RA.
In RB, the feature object has the same id, program, length, and init data items as in RA, but the program, length, and init data items may be omitted. When an RB omitting program, length, and init is encountered, they may be filled in from the feature object in the RA. The biggest difference from the feature object in RA is that the RB feature object carries additional data items, namely the video summary indexes, whose specific calculation method is described by program and init.
Since the data items of the feature objects in RA and RB (id, program, length, init, etc.) are essentially identical, the following description treats them without distinction, as it does the sign object.
In this embodiment, the feature object's optional calculation models, i.e., the calculation models for digest calculation, comprise two selectable models/programs named thumbnail and blocks.
The index calculation model/program thumbnail computes the index result by reducing the picture images to an average thumbnail. The configuration parameters of this model are: "init": ["thumbnail width", "thumbnail height", "sampling mode"]. The thumbnail width configures the pixel width of the thumbnail finally generated by program thumbnail, the thumbnail height configures its final pixel height, and the sampling mode is an integer such as -2, -1, 0, 1, 2, etc., indicating which images are extracted.
When n <= 0 is taken, the 1st to nth images are taken to generate the thumbnail; when n > 0, one image is taken every n-1 images, and the average thumbnail is then computed. For example, n = 1 means all original video images are taken; n = 2 means that after the first image, every other image is taken; n = 3 means that after the first image, one image is taken every 2 images. When making the average thumbnail, the thumbnails are accumulated pixel-by-pixel and then divided by the number of accumulated images. The thumbnail is encoded as a JPEG data block and then Base64-encoded to obtain the value of the result data item.
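The positive sampling modes can be sketched as a simple stride, under the assumption that mode n > 0 means "take one image every n-1 images after the first", i.e., a stride of n through the frame sequence:

```python
# Sketch of sampling mode n > 0: n = 1 takes every image, n = 2 every other
# image after the first, n = 3 one image every 2 images after the first.

def sample_frames(frames, n):
    """Select frames for thumbnail averaging under sampling mode n > 0."""
    return frames[::n]

picked = sample_frames(list(range(10)), 3)  # frame indices 0, 3, 6, 9
```

The selected frames would then be reduced to thumbnails and averaged as described above.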
The index calculation model/program blocks divides each picture image into equal blocks; each block is further divided as a 3 × 3 grid, and the average brightness of the central sub-block is taken as the block's representative. Each picture thus yields a group of representative block brightness values, and the brightness matrices are combined into a brightness tensor that becomes the value of the result data item.
The model configuration parameters of the index calculation model/program blocks are: "init": ["number of block columns", "number of block rows", "sampling mode"]. The sampling mode has the same meaning as described above, namely the mode of extracting images for calculation from an image group.
As a specific application example of the foregoing embodiment, the video distribution method proceeds as follows: 30 original color images of the BT.709 specification, 1920 pixels wide and 1080 pixels high, in the YUV420P data format, are captured every second; these images constitute the video to be published. Image compression encoding is performed on the video to be published; specifically, each 1920 × 1080 YUV420P image is compression-encoded to obtain compressed frame data. By default, H.264 image-frame Slice compressed data blocks are obtained, for example I-Slice, B-Slice, and P-Slice frame data blocks.
During encoding, a frame may be designated as an H.264 key-image I-Slice compressed data block; an SPS data block and a PPS data block are obtained and spliced together with the I-Slice compressed data block to form a key-frame image data block, i.e., an IDR frame data block. Image encoding keeps pace with capture: for example, 30 YUV420P 1920 × 1080 color images are received per second, and 30 H.264 frame data blocks are output. During encoding, after each key IDR frame is output, another IDR frame is output every 300 subsequent frame data blocks.
When digest calculation is performed on the original video images in the original video, the summary index is calculated with the blocks index calculation model/program: each image is divided into 16 columns and 9 rows of blocks, and the sampling mode is configured as 6, i.e., after one YUV420 image is taken, every 6th YUV420 image is taken at intervals of 5 images. The length is configured as 30, i.e., one video summary index is output for every 30 images input. During digest calculation, each time a video summary index is produced, it is made into an RB.
When an IDR frame is received, a copy of RA is obtained and the summary index calculation module is reset. When an RA or RB digest is obtained, it is made into a supplemental enhancement information (SEI) data block, and an SEI NAL unit containing the signature reference digest is made according to the Annex B specification of the ISO/IEC 14496-10 standard and output to the data interface of the video stream.
In particular, to prevent the signature reference digest in the resulting SEI data block from being semantically confused with SEI data blocks generated by other applications, the auxiliary data payload of SEI payloadType 5, i.e., user_data_unregistered, is used, and a UUID is specifically assigned to the signature reference digest; it is placed in the first 16 bytes of the SEI data block to identify a signature reference digest.
For example, the UUID is defined as 1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8.
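Packaging the digest as a user_data_unregistered payload can be sketched as follows; NAL/RBSP framing and emulation-prevention bytes are deliberately omitted, so this shows only the payload layout.

```python
# Sketch of an SEI user_data_unregistered (payloadType 5) payload: the
# 16-byte UUID leads the payload so the signature reference digest cannot
# be confused with other applications' SEI data.
import uuid

SRD_UUID = uuid.UUID("1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8")

def sei_user_data_payload(signed_digest: bytes) -> bytes:
    return SRD_UUID.bytes + signed_digest  # UUID guides the digest

payload = sei_user_data_payload(b'{"feature": {}, "sign": {}}')
```

A receiver checks the first 16 bytes against this UUID before treating the remainder as a signature reference digest.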
When an IDR frame or Slice frame is obtained, NAL units are generated by the Annex B rules and output to the data interface of the video stream. When an SEI NAL unit arrives at the same time as an IDR frame or Slice frame, the SEI NAL unit containing the RA digest is placed before the frame's NAL unit, and the SEI NAL unit containing the RB digest is placed after it.
The target release video generation method provided by this embodiment allows a video publisher to publish, to the public domain, a trusted video based on the signature reference digest. The video has anti-counterfeiting features and carries the publisher's identity stamp, while the non-encrypted nature of the public domain is maintained and the video content remains publicly accessible. The anti-counterfeiting features show in three aspects: first, the video summary index can be recalculated from the video content and checked against the index in the signature reference digest; second, the signature reference digest is the reference digest plus its signature, and someone without the private key cannot compute a new reference digest from forged video content and produce a valid signature; third, the signature reference digest carries a digital certificate indicating the publisher's identity, which can be managed by an authority. Therefore, if the digital certificate verifies as genuine, the reference digest signature verifies as genuine, and the recalculated digest index matches closely, the video was necessarily distributed by the holder of the private key of the valid digital certificate.
In the present embodiment, a video distribution method is provided, which may be used in a video distribution server, a mobile terminal, and the like. Fig. 4 is a flowchart of a video distribution method according to an embodiment of the present invention; as shown in fig. 4, the flow includes the following steps:
and S41, acquiring the target release video.
The target release video comprises a signature reference abstract and a compressed coding video stream, the signature reference abstract comprises a reference abstract and a signature of the reference abstract, and the reference abstract is obtained by splicing an abstract calculation result of content of an original video in the compressed coding video stream and publisher identity information of the release video.
It should be noted that the target release video here is not limited to a video delivered from the video release server; it may also be obtained from a higher-level video distribution server, and so on.
For the generation process of the target release video, please refer to the above details, and details are not repeated herein.
And S42, separating the signature reference digest and the compressed coding video stream from the target release video based on the identification of the signature reference digest.
The signature reference digest is encoded within the compressed encoded video stream and is distinguished from it by a corresponding identifier. For example, a signature reference digest is encoded as an SEI field in the compressed encoded video stream; this field can then be used to locate the signature reference digest in the target release video.
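The separation step S42 can be sketched on a toy unit stream. The leading marker here is an illustrative stand-in for the real identifier (the 16-byte SEI UUID); unit parsing of an actual H.264 byte stream is omitted.

```python
# Toy sketch of S42: split signature reference digests, identified by a
# leading marker, away from the compressed video units.

MARK = b"SRD:"  # illustrative identifier, not the real 16-byte UUID

def separate(units):
    digests, stream = [], []
    for payload in units:
        if payload.startswith(MARK):
            digests.append(payload[len(MARK):])  # a signature reference digest
        else:
            stream.append(payload)               # compressed video data
    return digests, stream

d, s = separate([b"SRD:ra", b"idr-frame", b"SRD:rb1", b"slice", b"slice"])
```

The separated digests are then available for the verification steps that follow, while the stream units proceed to decoding.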
The separated signature reference digests may all be of the same type or may be of different types. A same-type digest is obtained from the content of the original video, for example the second signature reference digest described above; digests of different types may be obtained from the original video content alone or combined with the publisher identity information, for example the first signature reference digest described above.
As described above, the signed reference digests may correspond to the compressed encoded video streams; accordingly, the signed reference digests separated from the target release video correspond to those streams. For example, for a video to be published, an original video of a target duration is extracted from the video to be published every preset duration, according to the requirements for generating the second signature reference digests. Continuing the example above: if the video to be published is 30 minutes long, the preset duration is 5 minutes, and the target duration is 10 seconds, the video to be published is processed as follows:
to-be-published sub-video 1: the video from [0, 5] minutes of the video to be published; after compression encoding, compressed encoded video stream 1 is obtained, a 10-second original video is extracted from to-be-published sub-video 1, and second signature reference digest 1 is generated;
to-be-published sub-video 2: the video from (5, 10] minutes; after compression encoding, compressed encoded video stream 2 is obtained, a 10-second original video is extracted from to-be-published sub-video 2, and second signature reference digest 2 is generated;
to-be-published sub-video 3: the video from (10, 15] minutes; after compression encoding, compressed encoded video stream 3 is obtained, a 10-second original video is extracted from to-be-published sub-video 3, and second signature reference digest 3 is generated;
and so on;
to-be-published sub-video 6: the video from (25, 30] minutes; after compression encoding, compressed encoded video stream 6 is obtained, a 10-second original video is extracted from to-be-published sub-video 6, and second signature reference digest 6 is generated.
As indicated above, the second signed reference digest corresponds to the compressed encoded video stream one-to-one, and therefore, the second signed reference digest and the corresponding compressed encoded video stream can be separated by using the identifier of the second signed reference digest.
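The segmentation in the example above can be sketched as follows (an illustrative Python sketch; the function name and record fields are invented for the example):

```python
def plan_segments(total_s, segment_s, extract_s):
    """Split a video of total_s seconds into segments of segment_s seconds;
    from each segment an extract_s-second original clip is taken for the
    second signature reference digest."""
    plan = []
    start, idx = 0, 1
    while start < total_s:
        end = min(start + segment_s, total_s)
        plan.append({
            "sub_video": idx,            # to-be-published sub-video index
            "range_s": (start, end),     # becomes compressed stream idx
            "digest_clip_s": extract_s,  # clip length for second digest idx
        })
        start, idx = end, idx + 1
    return plan

# 30-minute video, 5-minute segments, 10-second extraction window
plan = plan_segments(30 * 60, 5 * 60, 10)
```

Running this yields six entries, matching sub-videos 1 through 6 above.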
And S43, processing the compressed and coded video stream with preset service quality to obtain the video to be distributed with the preset service quality.
The preset service quality includes, but is not limited to, 8K, 4K, high definition, standard definition, smooth (low bitrate), and the like, and is determined according to actual requirements. The compressed encoded video stream is processed according to the determined preset service quality to obtain a video to be distributed with that quality. That is, through this step, videos to be distributed with different service qualities can be obtained from the same compressed encoded video stream.
In some embodiments, before S43, the method further includes: verifying the publisher identity information based on the signed reference digest; when the verification passes, step S43 is performed.
For example, the authenticity of the publisher's digital certificate carried in the signed reference digest may be verified; for a certificate with a false identity, a warning may be issued and a warning information log recorded, and further distribution of the untrusted video stream may be prevented.
Alternatively, the signature in the signed reference digest may be verified; for an unauthentic signature, a warning may be issued and a warning information log recorded, and further distribution of the untrusted video stream may likewise be prevented.
In some embodiments, since the compressed encoded video stream must be processed to obtain a video to be distributed with the preset service quality, the images of the compressed encoded video stream may be scanned and the video content checked against a preset distribution condition. For videos that do not satisfy the condition, a warning event may be issued and a warning information log recorded, or distribution of the video may be restricted, and so on.
And S44, compiling the signature reference abstract into the video to be distributed, and determining and distributing the target distribution video with the preset service quality.
For compiling the signature reference digest into the video to be distributed, refer to the description above about compiling the signature reference digest during generation of the target release video. Alternatively, the position of the signature reference digest is recorded when it is separated from the compressed encoded video stream; after the video to be distributed is obtained, the signature reference digest is compiled back into the recorded position, thereby determining the target distribution video. Finally, the target distribution video is distributed to the corresponding terminal.
In some embodiments, the signed reference digest includes a first signed reference digest including publisher identity information and a computation description for digest computation, and a second signed reference digest including a digest computation result of content of an original video in the compressed encoded video stream, and the video to be distributed includes sub-videos to be distributed in one-to-one correspondence with the compressed encoded video stream. Based on this, S44 includes:
(1) And compiling the second signature reference abstract into the corresponding sub-video to be distributed to obtain the target distribution sub-video with the preset service quality.
(2) And splicing the target distribution sub-videos with the preset service quality, compiling the first signature reference abstract into a splicing result, and determining and distributing the target distribution videos with the preset service quality.
It should be noted that the target release video includes multiple segments of compressed encoded video stream, and each segment can be processed by the above steps to obtain a sub-video to be distributed with the preset service quality. Based on the correspondence between the compressed encoded video streams and the second signature reference digests, the correspondence between the sub-videos to be distributed and the second signature reference digests can be determined. On this basis, each second signature reference digest is compiled into its corresponding sub-video to be distributed to form a target distribution sub-video with the preset service quality.
The target distribution sub-videos are spliced to obtain a concatenation result, and the first signature reference digest is then compiled into the concatenation result to obtain the target distribution video. The number of first signature reference digests can be determined according to the duration of the concatenation result: if the duration is long, multiple first signature reference digests may be compiled in, with their positions distributed over the concatenation result; if the duration is short, only one may be compiled in, and so on. The positions of the first signature reference digests are set according to actual requirements and are not limited here.
It should be noted that compiling the first signature reference digest is not limited to obtaining the concatenation result first and then compiling. Alternatively, the number of first signature reference digests to be compiled may be determined first; the number of compressed encoded video streams between two adjacent first signature reference digests is then determined from that number; finally, by counting the second signature reference digests as they are compiled, the compiling positions of the first signature reference digests can be determined.
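The counting scheme just described can be sketched as follows (illustrative Python; `ra_positions` and its arguments are hypothetical names, and simple even spacing is one possible placement policy):

```python
def ra_positions(num_rb, num_ra):
    """Given the number of second signature reference digests (one per
    compressed stream) and the desired number of first digests, return
    after how many compiled second digests each first digest is inserted."""
    if num_ra <= 0:
        return []
    interval = max(1, num_rb // num_ra)
    # insert a first digest after every `interval` second digests
    return [min((i + 1) * interval, num_rb) for i in range(num_ra)]
```

For example, with 6 compressed streams and 2 first digests, the first digests would be compiled after the 3rd and 6th second digests.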
The first signature reference digest carries description information that does not change over time, while the second signature reference digest is closely tied to the video content. By adopting different compiling modes for the different signature reference digests, the data volume added to the target distribution video by compiling the digests can be reduced while the reliability of the target distribution video is preserved.
In the video distribution method provided by this embodiment, a signature reference digest is carried in the target release video. The signature reference digest is obtained by digest calculation based on the content of the original video in the compressed encoded video stream; since an image has its own unique content, that is, invariants associated with the image, a digest calculated from these invariants is reliable. Meanwhile, the signature reference digest also includes the identity information of the publisher, so the resulting target distribution video carries the publisher's identity mark and has good anti-counterfeiting capability, ensuring that the target distribution video with the preset service quality is highly reliable.
As a specific application example of the video distribution method according to the embodiment of the present invention, the method includes: obtaining a target release video at 30 frames per second together with its signature reference digests RA or RB, etc., and performing separation of the signature reference digests on it.
In the separation process, the SEI data block whose payload type (PayloadType) is 5 and which is guided by the UUID {1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8} defined in the above embodiment is detected, and the signature reference digest is extracted from it; all other video frame data is used for the subsequent production of the preset service qualities.
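Detection of such an SEI data block can be sketched roughly as follows (illustrative Python; it assumes the payload type and payload size each fit in a single byte, and ignores H.264 emulation-prevention bytes and the multi-byte 0xFF length coding that a real parser must handle):

```python
DIGEST_UUID = bytes.fromhex("1e2bc68c33d25ca2af3b0b5e5469c7b8")

def extract_signed_digest(sei_rbsp: bytes, uuid: bytes = DIGEST_UUID):
    """Return the signature reference digest carried in a simplified SEI
    RBSP, or None if the block does not carry one."""
    if len(sei_rbsp) < 2:
        return None
    payload_type, size = sei_rbsp[0], sei_rbsp[1]
    body = sei_rbsp[2:2 + size]
    # payloadType 5 = user_data_unregistered, led by a 16-byte UUID
    if payload_type == 5 and body[:16] == uuid:
        return body[16:]
    return None
```

Any block that is not payload type 5 or does not start with the guiding UUID is treated as ordinary video data.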
When producing videos with different service qualities, each input video frame (or the compressed data of several video frames) is decoded and one or more video frame images are output; for example, 30 decoded 1920 × 1080 video images are obtained per second.
Each incoming video image is then reduced or enlarged according to the specified configuration and compression-encoded, outputting one or more video frames of different sizes or bitrates. For example, for an input image size of 1920 × 1080, three smaller sizes, 1280 × 720, 704 × 576, and 352 × 288, are produced, and three compressed video frames are obtained through H.264 encoding.
After an image is reduced or enlarged, the original aspect ratio can be kept in the encoded video frame or its related description information. For example, after a 1920 × 1080 image is reduced to 704 × 576 and encoded, the original 16:9 aspect ratio can be described in the encoded frame data.
During decoding, whenever an IDR frame is received and decoded, the encoding process is instructed to synchronously encode an IDR frame at each size/bitrate.
When a copy of an RA is received from the reference digest separation module, it is buffered until the next signed RA arrives. When an IDR frame is received from the image encoding module, the buffered RA is first output to the data interface of the video stream of the corresponding size, and then the IDR frame of that size is output. For example, when a 704 × 576 IDR encoded frame is received from the image encoding module, the buffered RA is output on the 704 × 576 video output data interface before the 704 × 576 video frame itself.
When a non-IDR frame is received from the image coding module, the video frame is directly output to the corresponding video stream data output interface.
When an RB is received, it is output to the video stream data output interfaces of all streams at the same time.
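The output logic of the last three paragraphs can be summarized in a small sketch (illustrative Python; the class and method names are invented):

```python
class DigestMultiplexer:
    """Sketch of the output logic described above: the latest RA is
    buffered and re-emitted ahead of each IDR frame on the matching
    output stream; RBs are broadcast to every stream."""
    def __init__(self, sizes):
        self.streams = {s: [] for s in sizes}   # one output per size
        self.buffered_ra = None

    def on_ra(self, ra):
        self.buffered_ra = ra                   # keep until the next RA

    def on_rb(self, rb):
        for out in self.streams.values():
            out.append(("RB", rb))              # same RB for all qualities

    def on_frame(self, size, frame, is_idr):
        out = self.streams[size]
        if is_idr and self.buffered_ra is not None:
            out.append(("RA", self.buffered_ra))  # RA precedes the IDR
        out.append(("IDR" if is_idr else "P", frame))
```

The per-size stream lists stand in for the video stream data output interfaces.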
When the RA or RB is separated and an RA is received, the digital certificate in the RA is first verified. If the certificate carried in the RA is signed by a trusted CA certificate already held locally, the certificate is validated as authentic. If no valid certificate can be extracted, a warning event is issued, a log is recorded, and a recommendation to halt the distribution process is made to the system.
Alternatively, the authentic digital certificate may be extracted and the value of its CN data field compared with the value of the author CN data field in the RA; if the two are not identical and have no inclusion relationship, a warning event is issued, a log is recorded, and the potential risk is reported.
Alternatively, a sign object is taken from the RA and deleted from the RA. The publisher's public key is then extracted from the digital certificate, the hash algorithm is obtained from the digest data field of the sign object, a hash value of the reference digest data block (with the sign object deleted) is calculated using that algorithm, and the signature data of the sign is decrypted with the publisher's public key to obtain another hash value. The two hash values are compared; if they are identical, the original signature reference digest is authenticated as genuine. Otherwise, a warning event may be issued, a log recorded, and a recommendation made to the system to halt the distribution process.
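The hash-and-compare verification described above can be illustrated with a toy sketch (Python). Textbook RSA with tiny fixed primes stands in for the real signature scheme purely for illustration; a production implementation would use a vetted cryptographic library and the hash algorithm actually named in the sign object's digest field:

```python
import hashlib

# Toy textbook-RSA parameters -- illustration only, never for real use
P, Q, E = 61, 53, 17
N = P * Q                              # public modulus
D = pow(E, -1, (P - 1) * (Q - 1))     # private exponent (Python 3.8+)

def toy_sign(digest_block: bytes) -> int:
    """Publisher side: hash the reference digest block (sign object
    removed) and 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(digest_block).digest(), "big") % N
    return pow(h, D, N)

def verify_reference_digest(digest_block: bytes, signature: int) -> bool:
    """Receiver side: recompute the hash of the block with the sign
    object deleted, 'decrypt' the signature with the public key, and
    compare the two hash values."""
    h = int.from_bytes(hashlib.sha256(digest_block).digest(), "big") % N
    return pow(signature, E, N) == h
```

A genuine signature verifies, while any altered signature yields a different recovered hash and fails.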
When producing video with the preset service quality, videos that do not meet the distribution conditions can be screened out by scanning. Specifically, when an RA is received, the author object and the video object are extracted and relevant records are made. When an event that violates the distribution conditions is triggered, it is recorded, and a recommendation to halt the distribution process is made to the system, or the image is appropriately processed using a suggested image processing method.
The video distribution method provided by this embodiment does not require encrypting the video content: a video tool or service can distribute the video content at different service qualities without reducing the credibility of the video stream, and can scan the video content and take necessary supervision measures, likewise without reducing the credibility of the video stream.
In this embodiment, a video playing method is provided, which can be used in a playing terminal, such as a computer or a mobile terminal. Fig. 5 is a flowchart of the video playing method according to an embodiment of the present invention. As shown in fig. 5, the flowchart includes the following steps:
and S51, acquiring a target video.
The target video comprises a signed reference digest and a compressed encoded video stream; the signed reference digest comprises a reference digest and a signature of the reference digest, and the reference digest is obtained by concatenating a digest calculation result of the content of the original video in the compressed encoded video stream with the publisher identity information of the target distribution video.
When the target video is the target distribution video, please refer to the above description for the generation method of the target video, which is not described herein again. When the target video is the target release video, please refer to the above description for the generation method of the target release video, which is not described herein again.
And S52, separating the signature reference digest and the compressed and coded video stream from the target video based on the identification of the signature reference digest.
Regarding the way of separating the signed reference digest and the compressed encoded video stream from the target video, it is similar to the way of separating the signed reference digest and the compressed encoded video stream from the target release video described in S42 in the embodiment shown in fig. 4, and therefore, the description thereof is omitted here.
In some embodiments, the signed reference digest includes a first signed reference digest including publisher identity information and a computation description for digest computation, and a second signed reference digest including a digest computation result of content of an original video in the compressed encoded video stream. Based on this, S52 includes:
(1) Separating the first signature reference digest from the target video using the identifier of the first signature reference digest.
(2) Separating the second signature reference digest and the compressed encoded video stream corresponding to it from the target video using the identifier of the second signature reference digest.
The specific contents of the first signature reference digest and the second signature reference digest are as described above, and are not described herein again.
For example, a video frame queue is used to process the received target distribution video: whenever a compressed encoded video frame is found, it is placed into the video frame queue; when a second signature reference digest is found, all video frames currently in the queue are taken out as the compressed encoded video stream covered by that digest.
Alternatively, it may be the case that the above-mentioned video frame queue is empty and the second signature reference digest is encountered again, and this time, the length of the compression-encoded video stream is regarded as 0, that is, an empty video stream is obtained.
When putting video frames into the video frame queue, the compressed frames may first be decoded into video frame images and the images placed in the queue instead of the compressed encoded data; the effect on verifying the authenticity of the video stream is the same.
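The queue-based separation described above can be sketched as follows (illustrative Python; the class and method names are invented):

```python
class FrameQueueSeparator:
    """Sketch of the queue-based separation: frames accumulate until a
    second signature reference digest (RB) arrives, at which point the
    queued frames form the compressed stream that the RB covers."""
    def __init__(self):
        self.queue = []
        self.segments = []   # (compressed stream, RB) pairs

    def on_frame(self, frame):
        self.queue.append(frame)

    def on_second_digest(self, rb):
        # an empty queue yields a zero-length (empty) video stream
        self.segments.append((list(self.queue), rb))
        self.queue.clear()
```

If two digests arrive back to back, the second one pairs with an empty stream, matching the empty-stream case described above.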
The target video comprises two types of signature reference digests, and each type can be accurately separated from the target video using its corresponding identifier.
S53, decoding the compressed and coded video stream, and determining a decoded video in the compressed and coded video stream.
And sending the compressed and coded video stream into a video decoder for decoding, thus obtaining the decoded video in the compressed and coded video stream.
And S54, performing summary calculation on the content of the decoded video, and determining a decoding summary index.
The digest is calculated in the same way as the digest calculation result described above was obtained.
Specifically, please refer to the above description of S22 in the embodiment shown in fig. 2, and details thereof are not repeated herein.
For example, when the digest calculation result was obtained as a 64 × 32 thumbnail of the first frame, the first original video image of the decoded video is reduced to 64 × 32 and used as the decoding digest index.
For example, when the digest calculation result was obtained as the average of three enumerated 64 × 32 thumbnails, three frames of original video images near the 1/3, 2/3, and 3/3 time points of the decoded video are reduced to 64 × 32 and then averaged pixel by pixel to obtain an average thumbnail, which is used as the decoding digest index.
For example, when the digest calculation result was obtained as the average brightness of 16 × 9 blocks of specified enumerated images, the specified number of images is taken; each image is divided evenly into 16 horizontal by 9 vertical blocks, and the average brightness of the pixels in roughly the central fifth of each block represents that block, so each image is represented by a 16 × 9 brightness vector. The representative vectors are then superposed and averaged into a single 16 × 9 brightness vector, which serves as the decoding digest index.
For example, when the digest calculation result was obtained as singular values of the average brightness of 32 × 18 blocks of specified enumerated images, each of the specified number of images is divided into 32 × 18 blocks; the average brightness of the blocks gives a 32 × 18 numerical matrix, the matrices are accumulated and then subjected to singular value decomposition, and the nonzero singular values are truncated into a variable-length vector, which serves as the decoding digest index.
For example, when the digest calculation result was obtained as the joint 16 × 9 block average brightness vectors of specified enumerated images, the specified number of images are taken to obtain their 16 × 9 block average brightness matrices, which are directly concatenated into one long numerical vector used as the decoding digest index.
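As one illustration of these calculation modes, a block average brightness vector and the superpose-and-average step might look like this (illustrative Python; for brevity each block is averaged over all its pixels rather than only the central fifth mentioned above):

```python
def block_luminance(image, cols=16, rows=9):
    """Average luminance per block of an image given as a 2-D list of
    luma values; returns a flat cols*rows brightness vector."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    vec = []
    for r in range(rows):
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            vec.append(sum(block) / len(block))
    return vec

def average_vectors(vectors):
    """Superpose per-frame vectors and average element-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
```

The same per-frame vectors could instead be concatenated directly, as in the joint-vector mode.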
And S55, verifying the target video based on the decoding abstract index and the signature reference abstract.
When verifying the target video, the calculated decoding digest index is compared with the video digest index and the difference is recorded. When the difference is large, a relevant warning is given or a relevant record is made, indicating that the compressed encoded video stream is to some degree not authentic.
In some embodiments, the first signed reference digest comprises a first reference digest and a first signature of the first reference digest, the first reference digest comprising the publisher identity information and a computation description for the digest computation. Based on this, the above S55 includes:
(1) And extracting the publisher identity information and/or the first signature in the first signature reference digest.
(2) And verifying the identity information and/or the first signature of the publisher to determine the verification result of the target video.
When verification is performed using the first signature reference digest, the authenticity of the digital certificate and/or the first signature in the publisher identity information can be verified. When the publisher's digital certificate is obtained, its authenticity is verified; when authenticity cannot be verified due to missing information, a relevant warning is given or a relevant record is made, indicating that the authenticity of the publisher's identity is in question.
When the acquired digital certificate of the publisher is not signed by a trusted authority or a CA center, a relevant warning is given or a relevant record is made, which indicates that the authenticity of the identity of the publisher is suspicious.
When no publisher digital certificate exists, a trusted publisher digital certificate access descriptor is not provided, and a trusted publisher identity public key access descriptor is not provided, a relevant warning is given or a relevant record is made, and the fact that the authenticity of the publisher identity is suspicious is indicated.
When the publisher's digital certificate is obtained but verification of the certificate fails, a relevant warning is given or a relevant record is made, indicating that the publisher's identity is false and not to be trusted.
The authenticity of the digital signature is verified using the obtained publisher public key with the reference digest data (excluding the signature part) as input; if verification fails, a relevant warning is given or a relevant record is made, indicating that the signature reference digest is not authentic and not to be trusted.
The first signed reference digest contains descriptive information that does not change as the images in the video change; it can therefore be used to characterize the target release video as a whole.
In some embodiments, the above S55 includes:
(1) And extracting the reference abstract in the second signature reference abstract to obtain a video abstract index, wherein the video abstract index is an abstract calculation result of the content of the original video in the compressed and coded video stream.
(2) And calculating the similarity between the decoding abstract index and the video abstract index.
(3) And determining the verification result of the target video based on the size of the similarity.
For example, when comparing the vector of the decoding digest index with the vector of the video digest index, the absolute differences may be computed element by element and averaged, or the cosine of the angle between the two vectors, their covariance, their correlation coefficient, or their structural similarity may be computed, and so on.
Taking the cosine of the angle between the two digest index vectors as an example: when the cosine is close to 1.0, the authenticity of the segment of compressed encoded video stream is very high; when it is close to 0.0, the authenticity is very low; a cosine > 0.7 may be judged as good authenticity, and a cosine < 0.5 as poor authenticity.
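The cosine comparison with the thresholds above can be sketched as follows (illustrative Python; the verdict labels are invented for the example):

```python
import math

def cosine(a, b):
    """Cosine of the angle between two digest index vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticity(decoded_index, reference_index):
    """Map the cosine between the decoding digest index and the video
    digest index to the verdicts described above."""
    c = cosine(decoded_index, reference_index)
    if c > 0.7:
        return "good"       # high degree of authenticity
    if c < 0.5:
        return "poor"       # compressed stream likely not authentic
    return "uncertain"      # between the two thresholds
```

Identical vectors yield cosine 1.0 and a "good" verdict; orthogonal vectors yield cosine 0.0 and "poor".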
When the video digest index is computed as an average thumbnail, the average thumbnail representing the decoding digest index can be compared pixel by pixel with the average thumbnail representing the video digest index; the regions corresponding to pixels whose difference exceeds 30% are then dyed and displayed during playback as a warning, and relevant records are made.
The second signed reference digest is in one-to-one correspondence with the compressed and encoded video stream, and therefore, the second signed reference digest can be used for representing the situation of the corresponding compressed and encoded video stream, and therefore the verification result of the target video is determined.
And S56, when the verification is passed, playing the decoded video.
When the verification is passed, the obtained decoded video can be determined to be a reliable video, so that the decoded video can be played.
When the verification fails, a warning mark may be made in the video image according to the verification result when the decoded video is rendered, for example by displaying a target image; no limitation is imposed here.
In the video playing method provided by this embodiment, a signature reference digest is carried in the target release video. The signature reference digest is obtained by digest calculation based on the content of the original video in the compressed encoded video stream; since an image has its own unique content, that is, invariants associated with the image, a digest calculated from these invariants is reliable. Meanwhile, the signature reference digest also includes the identity information of the publisher, so the resulting target video carries the publisher's identity mark and has good anti-counterfeiting capability.
As a specific application example of the video playing method, the video playing method includes: and acquiring a target video, and separating the signed reference digest and the compressed and coded video stream without the reference digest from the target video. For example, a video frame of 30 frames per second and a signature reference summary RA or RB, etc. are obtained for subsequent separation processing.
During separation, the SEI data block whose PayloadType is 5 and which is guided by the UUID {1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8} defined in the specific application example is detected, and the signature reference digest data is extracted from it for signature verification; all other video frame data is used for decoding.
In decoding, each of the input video frames or the plurality of video frame compressed data is decoded, and one or a plurality of video frame images are output. For example, 30 decoded 1920 × 1080 video picture images will be obtained every second.
When verifying the signature reference digest data block, each time such a block is received from the reference digest separation module, whether an RA or an RB has been received is determined by checking whether the block contains a signature result.
When an RA is received, the digital certificate in the RA is first verified. If the digital certificate carried in the RA is signed by a trusted CA certificate that is already local for playback purposes, then this certificate is validated as true. Or the authenticity of this certificate can be confirmed in some trusted way for playback purposes. When a certificate can be verified as authentic and trustworthy, authentication of this certificate is passed and further authentication and verification operations will be enabled. If a valid certificate cannot be extracted, the potential risk is reported to the image rendering module. The image rendering module will alert the video display window of this risk.
Optionally, the authentic and trusted digital certificate is extracted, the value of its CN data field is compared with the value of the author CN data field in the RA, if the two are not identical or there is no certain inclusion relationship, a potential CN spoofing risk is reported to the image rendering module, and the author information in the RA and the CN data field of the actual certificate or information of more data fields are displayed in the image rendering module to prompt the viewer of a possible CN spoofing risk.
The sign object is taken from the signature reference digest data block and deleted from it. The publisher's public key is then extracted from the digital certificate, the hash algorithm is obtained from the digest data field of the sign object, a hash value of the reference digest data block (with the sign object deleted) is calculated using that algorithm, and the signature data of the sign is decrypted with the publisher's public key to obtain another hash value. The two hash values are compared; if they are identical, the original signature reference digest is authenticated as genuine. Otherwise, the false reference digest is not adopted, a warning is sent to the image rendering module, and further index verification is terminated until the next RA is encountered.
When an RA is received, the author object is extracted, the video object is extracted, and the image rendering module is informed to display appropriately.
Before each IDR frame (a key frame of the compression-encoded video) is received, a configured signature reference digest RA must already have been received, and this RA indicates whether the received video stream carries a signature reference digest. When the received video stream carries no signature reference digest, its processing requires no signature authentication; only digest separation, decoding, and rendering are needed to complete playback.
In digest index verification, the index calculation models/programs described in the above examples, thumbnail and blocks, are used. Using the sign configuration in the RA, each image received from the image decoding module has its short-term data contribution computed by the configured thumbnail or blocks program.
For example, taking the thumbnail program: if the sequence number of the current image since the module was reset does not match the thumbnail sampling pattern, the image is discarded. If it matches, the image is reduced to a thumbnail of the specified width and height using a fast algorithm, such as bilinear interpolation, and accumulated onto a preset floating-point thumbnail base image. This base image is created when the thumbnail program is reset and configured. When the received image sequence number equals the configured length, the current video segment has been fully processed; the floating-point value of each pixel on the accumulated base image is then averaged over the number of accumulated images and converted into an integer thumbnail. That is, the index result is computed according to the thumbnail description, but without JPEG compression.
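A minimal sketch of the accumulate-and-average thumbnail index described above, assuming grayscale frames represented as nested lists of pixel values. Nearest-neighbour sampling stands in for the bilinear reduction named in the text, and the sampling pattern is simplified to "every stride-th frame"; both are illustrative choices.

```python
def downscale_nearest(img, out_w, out_h):
    """Reduce a grayscale image (list of rows) by nearest-neighbour sampling;
    a stand-in for the faster bilinear method mentioned in the text."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]


class ThumbnailAccumulator:
    """Accumulate sampled frames onto a floating-point base image and
    average them into an integer thumbnail (JPEG compression omitted)."""

    def __init__(self, width, height, length, stride):
        self.w, self.h = width, height
        self.length = length   # number of frames in one video segment
        self.stride = stride   # assumed sampling pattern: every stride-th frame
        self.reset()

    def reset(self):
        # The floating-point base image is created on reset/configuration.
        self.base = [[0.0] * self.w for _ in range(self.h)]
        self.count = 0
        self.seq = 0

    def feed(self, img):
        """Return the integer thumbnail when the segment ends, else None."""
        self.seq += 1
        if self.seq % self.stride == 0:          # matches the sampling pattern
            small = downscale_nearest(img, self.w, self.h)
            for r in range(self.h):
                for c in range(self.w):
                    self.base[r][c] += small[r][c]
            self.count += 1
        if self.seq == self.length:              # segment fully processed
            thumb = [[round(v / self.count) for v in row] for row in self.base]
            self.reset()
            return thumb
        return None
```

Feeding four constant 4x4 frames with `stride=2` accumulates only frames 2 and 4, and the averaged integer thumbnail is emitted when the segment length is reached.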
When the feature result of the RB is received, the image carried in the feature is Base64-decoded and JPEG-decompressed to obtain an integer thumbnail. The thumbnail computed above is then compared pixel by pixel with the decompressed thumbnail: when the pixel difference exceeds 50%, the pixel is marked as unreal (value 2); when it exceeds 30%, as possibly unreal (value 1); and when it is below 30%, as acceptable (value 0). This produces a reality-score thumbnail. The digest index verification module sends this score thumbnail to the image rendering module to guide tinting of the currently played/rendered video picture according to the scores. Meanwhile, all scores are summed and divided by the total number of thumbnail pixels to obtain a composite score: a score near 2 indicates unreal, near 1 possibly unreal, and near 0 an acceptable degree of reality. The composite score is also reported to the image rendering module.
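The pixel-by-pixel scoring and the composite score can be sketched as follows. Interpreting "pixel difference" as the absolute difference relative to the reference pixel value is an assumption made for illustration.

```python
def score_thumbnails(computed, reference):
    """Pixel-by-pixel reality scoring: a relative difference above 50%
    scores 2 (unreal), above 30% scores 1 (possibly unreal), otherwise 0
    (acceptable).  Returns the score map plus the composite score, i.e. the
    sum of all scores divided by the total number of pixels."""
    scores, total = [], 0
    for row_c, row_r in zip(computed, reference):
        srow = []
        for pc, pr in zip(row_c, row_r):
            diff = abs(pc - pr) / max(pr, 1)   # assumed relative difference
            s = 2 if diff > 0.5 else 1 if diff > 0.3 else 0
            srow.append(s)
            total += s
        scores.append(srow)
    n_pixels = len(computed) * len(computed[0])
    return scores, total / n_pixels
```

The score map drives the tinting of the rendered picture; the composite score gives the single unreal/possibly-unreal/acceptable indication reported upward.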
When each complete index verification finishes, that is, when the feature result of the RB has been received and the verification report completed, the calculation parameters are immediately cleared in preparation for index calculation on the next batch of decoded video.
When the blocks program is configured, the calculation proceeds in a similar way to the thumbnail program, yielding a tinting guide picture and a composite score for subsequent rendering and display.
With the above processing in place, functions such as presenting authenticity information and guiding image rendering let viewers directly perceive possibly untrustworthy behavior in a video while it is playing.
The following can be drawn on the video rendering window during playback:
when an author object is received, displaying its content appropriately, such as the publisher's name and organization;
when an author object is received together with a digital-certificate falsity alarm, displaying the alarm with appropriate prominence and, optionally, applying a high-deception warning tint to the playback picture;
and when an index verification result, namely a tinting guide and a composite score, is received, tinting the played video according to the tinting guide and recording the composite score appropriately, or posting a deception warning sign on the video.
The video playing method provided by this embodiment plays trusted video based on the signature reference digest, traces the video's publisher during playback, and identifies and warns about the degree of authenticity of the video stream. The player can thus confirm the publisher's identity while playing the video and determine whether the video has been tampered with to the point of losing the value the publisher committed to. When highly trusted video content is confirmed during playback, the publisher's true intent in releasing the video can be confirmed as well.
This embodiment further provides a video playing apparatus, which is used to implement the foregoing embodiments and preferred embodiments; details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a video playing apparatus, as shown in fig. 6, including:
an obtaining module 61, configured to obtain a target video, where the target video includes a signature reference digest and a compressed encoded video stream, the signature reference digest includes a reference digest and a signature of the reference digest, and the reference digest is obtained by splicing a digest calculation result of content of an original video in the compressed encoded video stream and publisher identity information of the target video;
a separation module 62, configured to separate the signature reference digest and the compressed encoded video stream from the target video based on the identifier of the signature reference digest;
a decoding module 63, configured to decode the compressed and encoded video stream, and determine a decoded video in the compressed and encoded video stream;
a calculation module 64, configured to perform digest calculation on the content of the decoded video and determine a decoding digest index, where the digest calculation is performed in the same manner as the calculation that produced the digest calculation result;
a verification module 65, configured to verify the target video based on the decoding digest index and the signature reference digest;
and a playing module 66, configured to play the decoded video when the verification is passed.
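Under illustrative names not taken from the patent, the six modules above wire together as a simple pipeline, which can be sketched as follows:

```python
def play_target_video(target_video, separate, decode, digest, verify, render):
    """Illustrative wiring of modules 61-66: separate the signature
    reference digest from the stream, decode, recompute the digest index
    the same way the reference was computed, verify, and play only when
    verification passes."""
    signed_ref, stream = separate(target_video)   # separation module 62
    decoded = decode(stream)                      # decoding module 63
    decoding_index = digest(decoded)              # calculation module 64
    if verify(decoding_index, signed_ref):        # verification module 65
        render(decoded)                           # playing module 66
        return True
    return False
```

Each callable stands in for one module; a real implementation would bind them to the codec, digest program, and signature checks described above.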
In some embodiments, the signature reference digest includes a first signature reference digest, which includes the publisher identity information and a computation description for the digest computation, and a second signature reference digest, which includes the digest calculation result of the content of the original video in the compressed encoded video stream; the separation module 62 includes:
a first separation unit, configured to separate the first signature reference digest from the target video by using the identifier of the first signature reference digest;
and a second separation unit, configured to separate the second signature reference digest and the compressed encoded video stream corresponding to the second signature reference digest from the target video by using the identifier of the second signature reference digest.
In some embodiments, the verification module 65 includes:
a first extraction unit, configured to extract the reference digest in the second signature reference digest to obtain a video digest index, where the video digest index is the digest calculation result of the content of the original video in the compressed encoded video stream;
a calculation unit, configured to calculate the similarity between the decoding digest index and the video digest index;
and a first verification unit, configured to determine the verification result of the target video based on the similarity.
In some embodiments, the first signature reference digest comprises a first reference digest and a first signature of the first reference digest, the first reference digest comprising the publisher identity information and a calculation description for the digest calculation; the verification module 65 further comprises:
a first extraction unit, configured to extract the publisher identity information and/or the first signature from the first signature reference digest;
and a second verification unit, configured to verify the publisher identity information and/or the first signature and determine the verification result of the target video.
In some embodiments, when the target video is a target distribution video, the device for generating the target distribution video includes:
a release video acquisition module, configured to acquire a target release video, where the target release video includes the signature reference digest and the compressed encoded video stream;
a digest separation module, configured to separate the signature reference digest and the compressed encoded video stream from the target release video based on the identifier of the signature reference digest;
a processing module, configured to perform preset quality-of-service processing on the compressed encoded video stream to obtain a video to be distributed with the preset quality of service;
and a determining module, configured to compile the signature reference digest into the video to be distributed, and to determine and distribute the target distribution video with the preset quality of service.
In some embodiments, the video to be distributed includes sub-videos to be distributed in one-to-one correspondence with the compressed encoded video streams, and the determining module includes:
a first compiling unit, configured to compile the second signature reference digest into the corresponding sub-video to be distributed to obtain target distribution sub-videos with the preset quality of service;
and a second compiling unit, configured to splice the target distribution sub-videos with the preset quality of service, compile the first signature reference digest into the splicing result, and determine and distribute the target distribution video with the preset quality of service.
In some embodiments, the target release video generation device includes:
a video acquisition module, configured to acquire the original video of the compressed encoded video stream and the publisher identity information;
a calculation module, configured to perform digest calculation on the content of the original video and determine a video digest index;
a signature module, configured to determine a reference digest based on the publisher identity information and the video digest index, and to sign the reference digest to obtain a digital signature;
a splicing module, configured to splice the reference digest and the digital signature to determine the signature reference digest;
and a compiling module, configured to compile the signature reference digest into the compressed encoded video stream to determine the target release video.
In some embodiments, the compiling module comprises:
the first encoding unit is used for encoding a second signature reference abstract into the target position of each compressed coding video stream to obtain a video code stream, and the second signature reference abstract is obtained by splicing the second reference abstract and the second signature;
and the second compiling unit is used for compiling the first signature reference abstract into the target position of the video code stream and determining a target release video, wherein the first signature reference abstract is obtained by splicing the first reference abstract and the first signature.
The video playing apparatus in this embodiment is presented in the form of functional units, where a unit may be an ASIC, a processor and memory executing one or more software or firmware programs, and/or another device that can provide the functionality described above.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which includes the video playing apparatus shown in fig. 6.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 7, the electronic device may include: at least one processor 71, such as a CPU (Central Processing Unit); at least one communication interface 73; a memory 74; and at least one communication bus 72, where the communication bus 72 is used to implement connection and communication between these components. The communication interface 73 may include a display and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The memory 74 may be a high-speed volatile random access memory (RAM) or a non-volatile memory, such as at least one disk memory. The memory 74 may optionally also be at least one storage device located remotely from the processor 71. The processor 71 may be connected with the apparatus described in fig. 6; an application program is stored in the memory 74, and the processor 71 calls the program code stored in the memory 74 to perform any of the above method steps.
The communication bus 72 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 72 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but that does not indicate only one bus or one type of bus.
The memory 74 may include a volatile memory, such as a random-access memory (RAM); it may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 74 may also comprise a combination of the above kinds of memory.
The processor 71 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of CPU and NP.
The processor 71 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, the memory 74 is also used for storing program instructions. Processor 71 may invoke program instructions to implement a video playback method as shown in any of the embodiments of the present application.
An embodiment of the present invention further provides a non-transitory computer storage medium storing computer-executable instructions that can execute the video playing method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.
Claims (10)
1. A video playback method, comprising:
acquiring a target video, wherein the target video comprises a signature reference digest and a compressed encoded video stream, the signature reference digest comprises a reference digest and a signature of the reference digest, and the reference digest is obtained by splicing a digest calculation result of the content of an original video in the compressed encoded video stream with publisher identity information of the target video;
separating the signature reference digest and the compressed encoded video stream from the target video based on an identifier of the signature reference digest;
decoding the compressed encoded video stream, and determining a decoded video in the compressed encoded video stream;
performing digest calculation on the content of the decoded video to determine a decoding digest index, wherein the digest calculation is performed in the same manner as the calculation that produced the digest calculation result;
verifying the target video based on the decoding digest index and the signature reference digest;
and when the verification is passed, playing the decoded video.
2. The method of claim 1, wherein the signature reference digest comprises a first signature reference digest and a second signature reference digest, the first signature reference digest comprising the publisher identity information and a computation description for the digest computation, the second signature reference digest comprising a digest calculation result of the content of the original video in the compressed encoded video stream, and wherein the separating the signature reference digest and the compressed encoded video stream from the target video based on an identifier of the signature reference digest comprises:
separating the first signature reference digest from the target video by using an identifier of the first signature reference digest;
and separating the second signature reference digest and the compressed encoded video stream corresponding to the second signature reference digest from the target video by using an identifier of the second signature reference digest.
3. The method of claim 2, wherein the verifying the target video based on the decoding digest index and the signature reference digest comprises:
extracting the reference digest in the second signature reference digest to obtain a video digest index, wherein the video digest index is the digest calculation result of the content of the original video in the compressed encoded video stream;
calculating the similarity between the decoding digest index and the video digest index;
and determining a verification result of the target video based on the similarity.
4. The method according to claim 2 or 3, wherein the first signature reference digest comprises a first reference digest and a first signature of the first reference digest, the first reference digest comprises the publisher identity information and a calculation description for the digest calculation, and the verifying the target video based on the decoding digest index and the signature reference digest further comprises:
extracting the publisher identity information and/or the first signature in the first signature reference digest;
and verifying the publisher identity information and/or the first signature, and determining the verification result of the target video.
5. The method according to claim 2, wherein, when the target video is a target distribution video, the method for generating the target distribution video comprises:
acquiring a target release video, wherein the target release video comprises the signature reference digest and the compressed encoded video stream;
separating the signature reference digest and the compressed encoded video stream from the target release video based on the identifier of the signature reference digest;
performing preset quality-of-service processing on the compressed encoded video stream to obtain a video to be distributed with the preset quality of service;
and compiling the signature reference digest into the video to be distributed, and determining the target distribution video with the preset quality of service.
6. The method according to claim 5, wherein the video to be distributed comprises sub-videos to be distributed in one-to-one correspondence with the compressed encoded video streams, and wherein the compiling the signature reference digest into the video to be distributed and determining and distributing the target distribution video with the preset quality of service comprises:
compiling the second signature reference digest into the corresponding sub-video to be distributed to obtain target distribution sub-videos with the preset quality of service;
and splicing the target distribution sub-videos with the preset quality of service, compiling the first signature reference digest into the splicing result, and determining and distributing the target distribution video with the preset quality of service.
7. The method according to claim 5, wherein the method for generating the target release video comprises:
acquiring the original video of the compressed encoded video stream and the publisher identity information;
performing digest calculation on the content of the original video to determine a video digest index;
determining a reference digest based on the publisher identity information and the video digest index, and signing the reference digest to obtain a digital signature;
splicing the reference digest and the digital signature to determine the signature reference digest;
and compiling the signature reference digest into the compressed encoded video stream to determine the target release video.
8. The method of claim 7, wherein the compiling the signature reference digest into the compressed encoded video stream to determine the target release video comprises:
compiling a second signature reference digest into the target position of each compressed encoded video stream to obtain a video code stream, wherein the second signature reference digest is obtained by splicing the second reference digest and the second signature;
and compiling the first signature reference digest into the target position of the video code stream to determine the target release video, wherein the first signature reference digest is obtained by splicing the first reference digest and the first signature.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, and the processor executing the computer instructions to perform the video playing method of any one of claims 1 to 8.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the video playing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211180825.7A CN115695909A (en) | 2022-09-27 | 2022-09-27 | Video playing method, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115695909A true CN115695909A (en) | 2023-02-03 |
Family
ID=85062295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211180825.7A Pending CN115695909A (en) | 2022-09-27 | 2022-09-27 | Video playing method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115695909A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118711483A (en) * | 2024-07-19 | 2024-09-27 | 马努(上海)艺术设计有限公司 | Mechanical physical pixel screen image display system and method based on servo cluster control |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11023618B2 (en) | Systems and methods for detecting modifications in a video clip | |
Lin et al. | Issues and solutions for authenticating MPEG video | |
CN112040336B (en) | Method, device and equipment for adding and extracting video watermark | |
US9202257B2 (en) | System for determining an illegitimate three dimensional video and methods thereof | |
US8938095B2 (en) | Verification method, verification device, and computer product | |
Lin | Watermarking and digital signature techniques for multimedia authentication and copyright protection | |
US12081843B2 (en) | System and method for identifying altered content | |
US10834158B1 (en) | Encoding identifiers into customized manifest data | |
CN115695909A (en) | Video playing method, electronic equipment and storage medium | |
US12010320B2 (en) | Encoding of modified video | |
CN113014953A (en) | Video tamper-proof detection method and video tamper-proof detection system | |
CN115115968A (en) | Video quality evaluation method, device and computer-readable storage medium | |
CN117956176A (en) | Code stream data authentication method, computer equipment and storage medium | |
JP4740706B2 (en) | Fraud image detection apparatus, method, and program | |
CN115695942A (en) | Video distribution method and device, electronic equipment and storage medium | |
CN117615075A (en) | Watermark adding and watermark identifying method, device, equipment and readable storage medium | |
US11599570B2 (en) | Device and method to render multimedia data stream tamper-proof based on block chain recording | |
CN115550730A (en) | Video distribution method and device, electronic equipment and storage medium | |
CN114727158A (en) | Broadcast television safe broadcasting detection method and system based on digital watermarking technology | |
CN119182973B (en) | Video watermarking method, watermark video playing control method, device, electronic equipment and storage medium | |
CN113613015A (en) | Tamper-resistant video generation method and device, electronic equipment and readable medium | |
US12158929B1 (en) | Watermarking digital media for authenticated content verification | |
US7356159B2 (en) | Recording and reproduction apparatus, recording and reproduction method, recording and reproduction program for imperceptible information to be embedded in digital image data | |
CN119182974B (en) | Video anti-tampering watermarking method and device based on edge pixel concealment | |
Alaa | ‘Watermarking images for fact-checking and fake news inquiry |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||