CN116916090A - Video stream data processing method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN116916090A (application CN202310779623.2A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region such as a picture, frame or field
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/4331—Caching operations, e.g. of an advertisement for later insertion during playback
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
An embodiment of the invention provides a method and a device for processing video stream data, wherein the method comprises the following steps: acquiring coding information of multi-path video stream data; decoding each path of video stream data according to the coding information, and storing each decoded sub-video image into a corresponding image cache queue; and extracting and outputting, from each image cache queue according to the coding information, the sub-video images belonging to the same original video image. Because each decoded sub-video image is stored in its corresponding image cache queue, the sub-video images are guaranteed to be processed in the order of the original video stream, maintaining temporal consistency. Extracting and outputting the sub-video images belonging to the same original video image from the image cache queues according to the coding information ensures coherent, smooth picture transitions when the sub-video images are synthesized, eliminating tearing cracks at the cut positions and improving the viewing experience.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method for processing video stream data, a device for processing video stream data, an electronic device, and a computer readable storage medium.
Background
With the development of audio and video technology, demand in the live audio/video and application markets continues to expand, and the live audio/video field is evolving rapidly. To meet these diversified demands, it is necessary to cut one video source into multiple paths of video and then encode, decode, and transmit each path separately.
Cutting a complete video source into multiple paths of video and encoding, decoding, and transmitting each path separately results in different processing times for the different paths. Some paths may be processed more slowly, so the output picture cannot display the complete video source in time. Moreover, because of the timing differences between paths, tearing cracks can appear in the output picture at the cut positions. When the pictures of the different paths are combined, discontinuous or unsmooth picture transitions arise, degrading the viewing experience.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention are directed to providing a method for processing video stream data and a corresponding apparatus for processing video stream data, which overcome or at least partially solve the foregoing problems.
In order to solve the above problems, an embodiment of the present invention discloses a method for processing video stream data, which includes: acquiring coding information of multi-path video stream data obtained by cutting an original video stream; decoding each path of video stream data according to the coding information, and storing each decoded sub-video image belonging to the same original video image of the original video stream into the image cache queue corresponding to its path of video stream data; and extracting and outputting, from each image cache queue according to the coding information, the sub-video images belonging to the same original video image.
Optionally, the extracting and outputting each sub video image belonging to the same original video image from each image buffer queue according to the coding information includes: and respectively extracting and outputting each sub-video image belonging to the same original video image from each image buffer queue according to the coding information and the output frame rate of the decoding end.
Optionally, the extracting and outputting each sub-video image belonging to the same original video image from each image buffer queue respectively includes: selecting a master cache queue and at least one slave cache queue from the image cache queues; sequentially selecting master sub-video images from the master cache queue at the time interval corresponding to the decoding-end output frame rate, and synchronously selecting the slave sub-video images corresponding to the master sub-video images from the at least one slave cache queue; drawing the master sub-video image and at least one of the slave sub-video images on a canvas object; and transmitting the canvas object to a display terminal.
Optionally, the sequentially selecting master sub-video images from the master buffer queue at the time interval corresponding to the decoding-end output frame rate, and synchronously selecting the slave sub-video images corresponding to the master sub-video images from the at least one slave buffer queue, includes: starting from the first sub-video image in the master cache queue, sequentially taking the sub-video images in the master cache queue as master sub-video images at the time interval; and synchronously selecting, from the at least one slave cache queue, the slave sub-video images having the same display timestamp as the master sub-video image.
The embodiment of the invention also discloses a processing method of the video stream data, which comprises the following steps: acquiring multi-path video stream data obtained by cutting an original video stream from a video source; and carrying out coding processing on each sub-video image belonging to the same original video image of each path of video stream data, and generating coding information of each path of video stream data so as to extract and output each sub-video image belonging to the same original video image from an image cache queue corresponding to each path of video stream data according to the coding information.
Optionally, the encoding processing is performed on each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data, so as to generate encoding information of each path of the video stream data, including: adding the same display time stamp to each sub video image belonging to the same original video image of the original video stream in each path of video stream data; and generating coding information of each path of video stream data according to the display time stamp.
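As an illustrative sketch only (not part of the claims; all names are hypothetical), the encoding-side step of stamping every sub-video image cut from one original video image with the same display timestamp might look like:

```python
from dataclasses import dataclass

@dataclass
class SubImage:
    stream_id: int   # which video-stream path this sub-image belongs to
    frame_no: int    # index of the original video image it was cut from
    pts: float       # display timestamp, shared by all sub-images of one frame

def stamp_sub_images(frame_no: int, n_streams: int, fps: float) -> list:
    """Give every sub-image cut from original frame `frame_no` the same PTS."""
    pts = frame_no / fps  # one shared display timestamp per original frame
    return [SubImage(stream_id=i, frame_no=frame_no, pts=pts)
            for i in range(n_streams)]

# Two sub-images cut from the 4th original frame of a 25 fps stream
subs = stamp_sub_images(frame_no=3, n_streams=2, fps=25)
```

The shared timestamp is what later lets the decoding end recognise which sub-images belong to the same original video image.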
The embodiment of the invention also discloses a device for processing video stream data, which comprises: the coding information acquisition module is used for acquiring coding information of the multi-path video stream data obtained by cutting the original video stream; the video stream decoding module is used for decoding each path of video stream data according to the coding information, and storing each sub-video image of the same original video image belonging to the original video stream after the decoding processing into an image cache queue corresponding to each path of video stream data; and the video stream transmission module is used for respectively extracting and outputting each sub video image belonging to the same original video image from each image buffer queue according to the coding information.
Optionally, the video stream transmission module is configured to extract and output each sub-video image belonging to the same original video image from each image buffer queue according to the encoding information and the output frame rate of the decoding end.
Optionally, the video streaming module includes: a buffer queue selection module, configured to select a master buffer queue and at least one slave buffer queue from the image buffer queues; a sub-video image selection module, configured to sequentially select master sub-video images from the master cache queue at the time interval corresponding to the decoding-end output frame rate, and synchronously select the slave sub-video images corresponding to the master sub-video images from the at least one slave cache queue; a sub-video image drawing module, configured to draw the master sub-video image and at least one of the slave sub-video images on a canvas object; and a canvas image transmission module, configured to transmit the canvas object to the display terminal.
Optionally, the sub video image selection module includes: the main sub video image selection module is used for starting from a first sub video image in the main cache queue, and sequentially taking the sub video images in the main cache queue as the main sub video images according to the time interval; and the slave sub-video image selection module is used for synchronously selecting the slave sub-video images with the same display time stamp as the master sub-video image from at least one slave cache queue.
The embodiment of the invention also discloses a device for processing video stream data, which comprises: the video stream acquisition module is used for acquiring multi-path video stream data obtained by cutting an original video stream from a video source; the video stream coding module is used for coding each sub-video image belonging to the same original video image of the original video stream in each path of video stream data to generate coding information of each path of video stream data; the coding information is used for extracting and outputting each sub-video image belonging to the same original video image from an image buffer queue corresponding to each path of video stream data.
Optionally, the video stream encoding module includes: the time stamp adding module is used for adding the same display time stamp to each sub-video image belonging to the same original video image of the original video stream in each path of video stream data; and the coding information generation module is used for generating coding information of each path of video stream data according to the display time stamp.
Optionally, the encoding information generating module includes: a position information adding module, configured to add, for each of the sub-video images, position information in the original video image to which the sub-video image belongs; and the coding information determining module is used for generating the coding information according to the display time stamp and the position information.
The embodiment of the invention also discloses an electronic device, which comprises: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the method of processing video stream data as described above.
The embodiment of the invention also discloses a computer-readable storage medium storing a computer program that causes a processor to execute the above method of processing video stream data.
The embodiment of the invention has the following advantages:
the processing scheme of the video stream data provided by the embodiment of the invention acquires the coding information of the multipath video stream data obtained by cutting the original video stream. And then decoding each path of video stream data, and storing each sub video image after decoding into an image cache queue corresponding to each path of video stream data. And then extracting and outputting each sub-video image belonging to the same original video image from each image cache queue according to the coding information.
Because each sub-video image in the embodiment of the invention is stored in its corresponding image buffer queue after decoding, the sub-video images are guaranteed to be processed in the order of the original video stream, maintaining temporal consistency. Extracting and outputting the sub-video images belonging to the same original video image from each image cache queue according to the coding information ensures coherent, smooth picture transitions during synthesis, eliminating tearing cracks and improving the viewing experience.
Drawings
Fig. 1 is a flowchart of steps of a method for processing video stream data according to an embodiment of the present invention;
Fig. 2 is a flowchart of steps of another method for processing video stream data according to an embodiment of the present invention;
Fig. 3 is a flowchart of steps of a method for processing video stream data according to still another embodiment of the present invention;
Fig. 4 is a schematic diagram of a synchronous splicing scheme of video stream data according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a decoding process of video stream data according to an embodiment of the present invention;
Fig. 6 is a block diagram of an apparatus for processing video stream data according to an embodiment of the present invention;
Fig. 7 is a block diagram of another apparatus for processing video stream data according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention may be more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
In an embodiment of the invention, an original video stream is cut into multiple paths of video stream data, and for each path of video stream data, the sub-video images belonging to the same original video image are encoded to obtain coding information. Each path of video stream data is then decoded according to the coding information, and the decoded sub-video images are stored into the image cache queue corresponding to that path. Finally, the sub-video images belonging to the same original video image are extracted from the image cache queues according to the coding information and output.
Referring to fig. 1, a flowchart of steps of a method for processing video stream data according to an embodiment of the present invention is shown. The processing method of the video stream data specifically comprises the following steps:
and step 101, obtaining coding information of the multi-path video stream data obtained by cutting the original video stream.
In an embodiment of the present invention, the video source may be a PC or another media resource that supports multiplexing. The original video stream originates from the video source: it may be video captured by the video source, or video stored or transmitted by it. The original video stream is composed of a plurality of consecutive original video images. The original video stream can be cut into multi-path video stream data according to actual requirements. In a specific cut, each original video image is cut into multiple sub-video images, in such a way that the sub-video images obtained from one original video image can be stitched back into that original video image. For example, cutting the original video image P01 of the original video stream V yields sub-video images p011 and p012, and p011 and p012 can be stitched back into P01. Similarly, cutting the original video image P02 of the original video stream V yields sub-video images p021 and p022, and cutting the original video image P0n yields sub-video images p0n1 and p0n2. The sub-video images p011, p021, … …, p0n1 form one path of video stream data v01 of the original video stream V, and the sub-video images p012, p022, … …, p0n2 form another path of video stream data v02. The coding information of each path of video stream data is obtained by encoding that path of video stream data.
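A minimal sketch of the cut-and-stitch property described above, assuming a two-way vertical cut and NumPy arrays standing in for video frames (all names are hypothetical, not from the patent):

```python
import numpy as np

def cut_frame(frame, n_paths=2):
    """Cut one original video image into n_paths vertical strips (sub-video images)."""
    return np.array_split(frame, n_paths, axis=1)

def stitch_frame(sub_images):
    """Splice the sub-video images back into the original video image."""
    return np.concatenate(sub_images, axis=1)

frame = np.arange(2 * 8 * 3).reshape(2, 8, 3)  # tiny 2x8 RGB "frame"
p011, p012 = cut_frame(frame)                  # sub-images of original image P01
restored = stitch_frame([p011, p012])
assert np.array_equal(restored, frame)         # cut followed by stitch is lossless
```

The only requirement the text imposes is that the cut be invertible; the vertical two-strip split here is just one way to satisfy it.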
Step 102, decoding each path of video stream data according to the coding information, and storing each sub-video image of the same original video image belonging to the original video stream after decoding into an image buffer queue corresponding to each path of video stream data.
In an embodiment of the present invention, a corresponding image buffer queue may be created for each path of video stream data. The image buffer queue of each path of video stream data is used for storing sub-video images of the corresponding video stream data. For example, the image buffer queue D01 of the video stream data v01 is used to store decoded sub-video images p011, p021, … …, p0n1 in the video stream data v 01. The image buffer queue D02 of the video stream data v02 is used to store the decoded sub-video images p012, p022, … …, p0n2 in the video stream data v 02.
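The per-path image cache queues can be modelled as FIFO deques keyed by queue name; the queue names follow the example in the text, the rest is an illustrative sketch:

```python
from collections import deque

# One FIFO image cache queue per video-stream path: D01 for v01, D02 for v02.
queues = {"D01": deque(), "D02": deque()}

def on_decoded(queue_name, sub_image):
    """Store each decoded sub-video image into its path's cache queue, in decode order."""
    queues[queue_name].append(sub_image)

# Decoded sub-images arrive per path in original-stream order.
for name in ("p011", "p021"):
    on_decoded("D01", name)
for name in ("p012", "p022"):
    on_decoded("D02", name)
```

Appending in decode order is what preserves the original-stream ordering that the scheme relies on.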
And step 103, extracting and outputting each sub-video image belonging to the same original video image from each image cache queue according to the coding information.
In the embodiment of the present invention, the sub-video image p011 belonging to the original video image P01 may be extracted from the image buffer queue D01, and the sub-video image p012 belonging to P01 may be extracted from the image buffer queue D02; the two are transmitted together to display the complete original video image P01. Next, the sub-video image p021 belonging to the original video image P02 may be extracted from D01, and the sub-video image p022 belonging to P02 from D02, and the two are transmitted together to display the complete original video image P02.
The processing scheme of the video stream data provided by the embodiment of the invention acquires the coding information of the multipath video stream data obtained by cutting the original video stream. And then decoding each path of video stream data, and storing each sub video image after decoding into an image cache queue corresponding to each path of video stream data. And then extracting and outputting each sub-video image belonging to the same original video image from each image cache queue according to the coding information.
Each sub video image in the embodiment of the invention is stored in the corresponding image buffer queue after decoding processing, so that the sub video images are ensured to be processed according to the sequence of the original video stream, and the consistency of timeliness is maintained. By extracting and outputting the sub-video images belonging to the same original video image from each image cache queue according to the coding information, the sub-video images can be ensured to keep consistent and smooth picture transition in the synthesis process, so that picture tearing cracks are eliminated, and viewing experience is improved.
In an exemplary embodiment of the present invention, one implementation of extracting and outputting the sub-video images belonging to the same original video image from each image buffer queue according to the encoding information is to do so according to both the encoding information and the decoding-end output frame rate. In practice, the decoding-end output frame rate determines the time interval at which sub-video images are extracted and output. Extracting and outputting the sub-video images at the decoding-end output frame rate ensures that they are presented at the correct time intervals during playback, which helps maintain the continuity and smoothness of the video and keeps audio and video synchronized. Specifically, a master buffer queue and at least one slave buffer queue are selected from the image buffer queues; master sub-video images are selected sequentially from the master buffer queue at the time interval corresponding to the decoding-end output frame rate, and the slave sub-video images corresponding to each master sub-video image are synchronously selected from the at least one slave buffer queue. By extracting the corresponding master and slave sub-video images from the master and slave buffer queues, the complete original video image can be drawn and presented, avoiding image tearing or missing regions in the output and providing more accurate, continuous image display. The master sub-video image and the at least one slave sub-video image are drawn on a canvas object, and the canvas object is transmitted to the display terminal.
For multi-path video stream data, the sub-video images of each path can be independently extracted and output by selecting different master and slave buffer queues, so multiple paths of video streams can be processed and displayed simultaneously. Drawing the master and slave sub-video images on a canvas object and transmitting only the canvas object to the display terminal reduces the amount of transmitted data and improves transmission efficiency: only the drawn canvas object is transmitted, rather than the data of each individual sub-video image, saving bandwidth and transmission cost. For example, the master buffer queue D01 and the slave buffer queue D02 are selected from the image buffer queues D01 and D02. The master sub-video images p011 and p021 are selected sequentially from D01 at the time interval corresponding to the frame rate information FR01, and the slave sub-video image p012 corresponding to p011 and the slave sub-video image p022 corresponding to p021 are synchronously selected from D02 at the same interval. The master sub-video image p011 and the slave sub-video image p012 are then drawn on the canvas object, the canvas object is transmitted to the display terminal, and the display terminal displays the original video image P01. After one time interval corresponding to FR01, the master sub-video image p021 and the slave sub-video image p022 are drawn on the canvas object, which is transmitted to the display terminal to display the original video image P02.
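Putting the steps above together, a hedged sketch of composing one output frame from the master and slave queues (names hypothetical; a plain NumPy buffer stands in for the canvas object, and this in-order version assumes both queues are already aligned):

```python
import numpy as np
from collections import deque

FPS = 25
INTERVAL = 1.0 / FPS  # time interval corresponding to the decoding-end output frame rate

# Each entry is (display_timestamp, sub_image); D01 is master, D02 is slave.
d01 = deque([(0.0, np.full((2, 4, 3), 1)), (INTERVAL, np.full((2, 4, 3), 3))])
d02 = deque([(0.0, np.full((2, 4, 3), 2)), (INTERVAL, np.full((2, 4, 3), 4))])

def compose_next():
    """Pop the next master sub-image and its same-PTS slave, draw both on one canvas."""
    pts, master = d01.popleft()
    slave_pts, slave = d02.popleft()
    assert slave_pts == pts            # same display timestamp -> same original image
    canvas = np.concatenate([master, slave], axis=1)  # draw side by side on the canvas
    return pts, canvas                 # the canvas is what gets sent to the display

pts, canvas = compose_next()
```

In a real player this function would be driven by a timer firing every `INTERVAL` seconds, so frames reach the display at the decoding-end output frame rate.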
In summary, by extracting and outputting each sub-video image according to the encoding information and the frame rate information of the original video stream and combining the mode of selecting the master buffer queue and the slave buffer queue, time synchronization, picture integrity, multi-channel video stream processing and efficient transmission can be realized, thereby improving the quality and efficiency of audio/video processing.
In an exemplary embodiment of the present invention, main sub-video images are sequentially selected from the main buffer queue at the time interval corresponding to the decoding-end output frame rate, and the slave sub-video images corresponding to each main sub-video image are synchronously selected from at least one slave buffer queue. For example, starting from the first sub-video image p011 of the main buffer queue D01, the sub-video images p011, p021, ..., p0n1 in the main buffer queue D01 are successively taken as main sub-video images at the time interval t corresponding to the frame rate information FR01. The slave sub-video image p012 having the same display time stamp as the main sub-video image p011, the slave sub-video image p022 having the same display time stamp as the main sub-video image p021, ..., and the slave sub-video image p0n2 having the same display time stamp as the main sub-video image p0n1 are then synchronously selected from the slave buffer queue D02.
According to the embodiment of the invention, the main sub-video image and the slave sub-video images are selected at the time interval corresponding to the decoding-end output frame rate, ensuring that they are presented in the correct temporal order during playback. The temporal accuracy of the video is thus maintained, avoiding timing confusion and incoherence between images. By selecting the main sub-video image from the main buffer queue and the corresponding slave sub-video images from at least one slave buffer queue, the main and slave sub-video images are guaranteed to match in time stamp, achieving picture consistency and helping to avoid tearing, overlapping or incomplete pictures. By selecting the main and slave sub-video images from different buffer queues, multiple video streams can be processed simultaneously: each buffer queue can correspond to one path of video stream, and parallel processing and display of multi-path video are realized through synchronous selection. Selecting the main and slave sub-video images at fixed time intervals and extracting them from the buffer queues also reduces the amount of data transferred and processed, since only the images within a specific time interval need to be selected and processed rather than all images, improving processing efficiency and reducing resource consumption. In summary, selecting the main sub-video image from the main buffer queue at the time interval corresponding to the frame rate information, and synchronously selecting the corresponding slave sub-video images from the slave buffer queues, provides time synchronization, picture consistency, multi-path video processing and improved efficiency.
Such an approach may ensure accuracy, continuity and integrity of the video, providing a good audiovisual experience.
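As an illustrative sketch only (not the patented implementation), the synchronized selection described above can be modeled in Python. The function name, the `pts` field, and the use of deques of per-frame dicts are all assumptions made for this example:

```python
import time
from collections import deque  # callers are assumed to pass deque-based queues

def extract_synchronized(main_q, slave_qs, fps):
    """Yield groups of sub-video images that share a display time stamp,
    pacing the output at the decoding-end output frame rate."""
    interval = 1.0 / fps                  # time interval for extraction and output
    while main_q:
        main = main_q.popleft()           # next main sub-video image
        group = [main]
        for q in slave_qs:
            # drop stale slave images until the PTS matches the main image's
            while q and q[0]["pts"] < main["pts"]:
                q.popleft()
            if q and q[0]["pts"] == main["pts"]:
                group.append(q.popleft())
        yield group                       # one complete original video image
        time.sleep(interval)              # present at the decoding-end frame rate
```

Each yielded group contains the sub-video images needed to reassemble one original video image, so tearing from mismatched timestamps is avoided by construction.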
In one exemplary embodiment of the present invention, one implementation of drawing the main sub-video image and the at least one slave sub-video image on the canvas object is to draw them according to the position information of the main sub-video image and the position information of the at least one slave sub-video image. For example, the position information of the main sub-video image P011 indicates that it is the left half of the original video image P01, so the main sub-video image P011 is drawn on the left half of the canvas object; the position information of the slave sub-video image P012 indicates that it is the right half of the original video image P01, so the slave sub-video image P012 is drawn on the right half of the canvas object.
The embodiment of the invention can accurately draw the main sub video image and the sub video image at the corresponding positions on the canvas object by using the position information of the main sub video image and the sub video image. Therefore, the layout and the position of the image on the canvas can be ensured to meet the requirement of the original video image, and the spatial relationship and the overall structure of the original video image are maintained. Segmentation and presentation of images may be achieved by drawing a master sub-video image and a slave sub-video image on different areas of a canvas object using location information. In this way, different parts of the master and slave sub-video images can be clearly presented, avoiding confusion and overlap between them. According to the position information, the layout mode of the main sub-video image and the sub-video image on the canvas object can be freely selected. They may be arranged horizontally, vertically, or otherwise laid out as desired to accommodate different display requirements and display effects. By drawing the master and slave sub-video images on canvas objects, multiple images may be presented simultaneously in a unified canvas. This can enhance the visual experience of the viewer, making it more intuitive to understand and perceive the content and structure of the original video image. In summary, the main sub-video image and the sub-video image are drawn on the canvas object according to the position information of the main sub-video image and the sub-video image, and the method has the advantages of accurate position, picture segmentation, flexible layout and improvement of the visual effect. By the method, accuracy, definition and comprehensiveness of the image can be ensured, and a better visual display effect is provided for a viewer.
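Position-based drawing can be sketched as follows, purely for illustration: the (x, y, w, h) form of the position information and the row-list representation of pixel data are hypothetical choices, not taken from the patent:

```python
def draw_on_canvas(canvas, sub_images):
    """Place each sub-video image into the region recorded in its
    position information, reassembling the original video image."""
    for sub in sub_images:
        x, y, w, h = sub["position"]          # region within the original image
        for r in range(h):                    # copy the sub-image row by row
            canvas[y + r][x:x + w] = sub["pixels"][r]
    return canvas

# A 4x4 original image cut into a left half (value 1) and a right half (value 2)
canvas = [[0] * 4 for _ in range(4)]
left = {"position": (0, 0, 2, 4), "pixels": [[1, 1]] * 4}
right = {"position": (2, 0, 2, 4), "pixels": [[2, 2]] * 4}
draw_on_canvas(canvas, [left, right])
```

Because each sub-image writes only its own region, the halves can be laid out horizontally, vertically, or in any arrangement the position information describes.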
Referring to fig. 2, a flowchart of steps of another method for processing video stream data according to an embodiment of the present invention is shown. The processing method of the video stream data specifically comprises the following steps:
in step 201, multiple paths of video stream data obtained by cutting an original video stream are obtained from a video source.
In the embodiment of the present invention, the description of step 201 may refer to the description of step 101, which is not repeated herein.
Step 202, performing encoding processing on each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data, and generating encoding information of each path of video stream data, so as to extract and output each sub-video image belonging to the same original video image from an image buffer queue corresponding to each path of video stream data according to the encoding information.
In the embodiment of the invention, the encoding process can be performed for each path of video stream data. Specifically, an encoder can be used to encode one path of video stream data. If the original video stream is cut to obtain three paths of video stream data, three encoders can be utilized to respectively encode the three paths of video stream data in parallel. For example, the video stream data v01 is encoded by the encoder B01, and the video stream data v02 is encoded by the encoder B02.
In the embodiment of the invention, the encoding information describes the characteristics, motion relationships and compression mode of the video stream data; it guides the decoding process and helps the decoder correctly restore the original video image. The encoding information resulting from the encoding process may include, but is not limited to:
Frame type: each sub-video image may be identified as a different type, such as a key frame (I-frame), a predicted frame (P-frame) or a bidirectionally predicted frame (B-frame). The frame type tells the decoder how to process the sub-video image to restore the original video image.
Motion vectors: a motion vector describes the motion relationship between the current sub-video image and a reference frame. It gives the position in the reference frame of each block of pixels in the current sub-video image, and is used for motion compensation and motion prediction.
Pixel differences: the encoding information may include pixel difference (residual) data representing the pixel value differences between the current sub-video image and the reference frame. Recording only the differences reduces the amount of data required for encoding and enables video compression.
Quantization parameters: the encoder uses quantization parameters to control the compression ratio. A higher quantization parameter yields a larger compression ratio but may reduce video quality; the quantization parameter information tells the decoder how to decode and restore the image.
Coding parameters: the encoding information may also include other parameters related to the encoding process, such as the encoder configuration, the selected encoding algorithm and encoding presets. These parameters affect coding quality, compression efficiency and decoding complexity.
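Purely as a hypothetical sketch, the categories of encoding information listed above could be grouped into a per-frame record; the class name, field names and defaults below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EncodedFrameInfo:
    """Hypothetical container for per-frame encoding information."""
    frame_type: str                        # "I" (key), "P" (predicted) or "B" (bi-predicted)
    pts: int                               # display (presentation) time stamp, in ms
    motion_vectors: List[Tuple[int, int]] = field(default_factory=list)
    quant_param: int = 23                  # higher QP: stronger compression, lower quality
    residuals: bytes = b""                 # pixel differences vs. the reference frame

    def is_keyframe(self) -> bool:
        # an I-frame can be decoded without any reference frame
        return self.frame_type == "I"
```

A decoder receiving such records would use `frame_type` and `motion_vectors` to pick reference frames, `residuals` and `quant_param` to reconstruct pixels, and `pts` for ordering.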
The coding information obtained by the coding process can be used for extracting and outputting each sub-video image belonging to the same original video image from the image buffer queue.
In an exemplary embodiment of the present invention, one implementation manner of encoding each sub-video image belonging to the same original video image of the original video stream in each path of video stream data, and generating the encoding information of each path of video stream data, is to add the same display time stamp to each sub-video image belonging to the same original video image, and to generate the encoding information of each path of video stream data according to the display time stamp. Here, a presentation time stamp (Presentation Time Stamp, abbreviated PTS) determines the presentation order and timing of media data on the playback time axis during audio-video processing. For example, the same display time stamp ms01 is added to the sub-video image P011 belonging to the original video image P01 in the video stream data v01 and to the sub-video image P012 belonging to the original video image P01 in the video stream data v02. Likewise, the same display time stamp ms02 is added to the sub-video image P021 belonging to the original video image P02 in the video stream data v01 and to the sub-video image P022 belonging to the original video image P02 in the video stream data v02. The display time stamp of each sub-video image is included as part of the encoding information of the video stream data.
The embodiment of the invention can ensure that each sub-video image is presented in the correct sequence on the playing time axis by adding the same display time stamp to the sub-video image. This helps to maintain synchronization of the audio and video, ensures consistency between the audio and video, and provides a good viewing experience. Each sub-video image may be associated with the original video image to which it belongs, with the display time stamp as part of the encoded information. In the decoding and presenting stage, the decoder can correctly extract and assemble the original video image according to the time stamp information, so as to ensure the integrity and accuracy of the image. For multiple paths of video stream data, sub-video images in each path of video stream can be associated by adding a display time stamp, and an independent time stamp is provided for each sub-video image. This makes the processing of the multi-path video stream more flexible and reliable. The encoded information containing the display time stamp can provide more accurate time reference, help the decoder correctly restore the image in the decoding process, and reduce the problem of image tearing or incomplete caused by time deviation. In summary, by adding the display time stamp as a part of the encoded information, time synchronization, association of video images, support of multiple video stream processing, and more accurate time reference can be achieved, thereby improving quality and effect of audio/video processing.
In an exemplary embodiment of the present invention, one implementation of generating the encoding information of each path of video stream data according to the display time stamp is to add, for each sub-video image, position information indicating its region in the original video image to which it belongs, and to generate the encoding information based on the display time stamp and the position information. During the encoding process, not only the display time stamp but also the position information is added to each sub-video image. The position information records the area occupied by the sub-video image within the original video image to which it belongs. For example, position information s011, describing its region in the original video image P01, is added to the sub-video image P011, and position information s012 is added to the sub-video image P012.
The embodiment of the invention can accurately record the position area of each sub-video image in the original video image by adding the position information. This helps to correctly place the sub-video images in the corresponding positions of the original video images during the decoding and rendering stages, maintaining the accuracy and integrity of the images. Each sub-video image may be associated with a particular region in the original video image to which it belongs, with the location information along with the display time stamp as part of the encoding information. In the decoding process, the decoder can accurately extract and assemble specific areas of the original video image according to the position information and the display time stamp, and the accuracy and the continuity of the image are ensured. For multi-path video stream data, independent positioning and region association of sub-video images in each path of video stream can be realized by adding position information to each sub-video image. This is very useful for processing multiple video streams simultaneously, ensuring that each sub-video image is properly placed and processed. The addition of location information may provide a more accurate positioning reference that helps the decoder correctly restore a particular region of the image during decoding. This helps to reduce problems such as image tearing, region misalignment or distortion, and provides a higher quality image presentation. In summary, by adding position information to the sub-video image and generating coding information in combination with the display time stamp, accurate positioning, region association and multi-path video stream processing of the image can be realized, more accurate image restoration and display are provided, and the quality and effect of audio and video processing are further improved.
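A minimal sketch of generating such encoding information follows; the function name, dict layout, (x, y, w, h) region format and 25 fps default are assumptions made for illustration:

```python
def annotate_sub_images(original_frames, cuts, fps=25):
    """For each original frame, stamp every sub-image produced by the cut
    with the same display time stamp plus its region in the original image."""
    interval_ms = 1000 // fps
    annotated = []
    for n, frame_id in enumerate(original_frames):
        pts = n * interval_ms            # identical PTS for all sub-images of this frame
        for stream_idx, region in enumerate(cuts):
            annotated.append({
                "stream": stream_idx,    # which path of video stream data
                "frame": frame_id,
                "pts": pts,
                "position": region,      # (x, y, w, h) in the original image
            })
    return annotated

# Two original images P01, P02, each cut into left and right 960x1080 halves
info = annotate_sub_images(["P01", "P02"],
                           [(0, 0, 960, 1080), (960, 0, 960, 1080)])
```

Because every sub-image of one original frame carries the same `pts`, a decoder can regroup the multi-path streams by timestamp and place each piece by its `position`.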
Referring to fig. 3, a flowchart of steps of yet another method for processing video stream data according to an embodiment of the present invention is shown. The processing method of the video stream data specifically comprises the following steps:
in step 301, multi-path video stream data obtained by cutting an original video stream is obtained from a video source.
In step 302, each sub-video image belonging to the same original video image of the original video stream is encoded in each path of video stream data, so as to generate encoding information of each path of video stream data.
Step 303, performing decoding processing on each path of video stream data, obtaining corresponding coding information, and storing each sub video image after decoding processing into an image buffer queue corresponding to each path of video stream data.
Step 304, each sub video image belonging to the same original video image is extracted and output from each image buffer queue according to the coding information.
Based on the above description of the embodiments of the method for processing video stream data, a synchronous splicing scheme for video stream data is described below. The scheme may involve a video source, encoders, a decoder and a display terminal, where the number of encoders may equal the number of paths of the multi-path video stream data output from the video source. Referring to fig. 4, a schematic diagram of a synchronous splicing scheme of video stream data according to an embodiment of the present invention is shown. The video source may be connected to the plurality of encoders through a plurality of High-Definition Multimedia Interface (HDMI) connections, with each path of video stream data transmitted to one encoder via HDMI. During encoding of the received video stream data, each encoder adds a display time stamp to each sub-video image; different sub-video images belonging to the same original video image receive the same display time stamp. The decoder receives the encoded data of each path of video stream together with the encoding information of each path of video stream. The encoding information may include, but is not limited to, the identity of the encoder, the identity of the original video stream, and the position information of each sub-video image in the original video image. Furthermore, the encoding information may be communicated to the decoder in the form of a window table (each path of encoded video stream data appears as one window in the decoder). The decoder creates a corresponding image buffer queue for each path of video stream data according to the encoding information in the window table, and stores the decoded sub-video images into the corresponding image buffer queues. The decoder may also create a global canvas object, and obtain sub-video images from the plurality of image buffer queues at intervals determined by the frame rate information of the original video stream.
Specifically, the first path of video stream in the window table is used as the main window. The display time stamp of the first sub-video image in this path is recorded, and that sub-video image is drawn on the canvas object according to its position information. The sub-video images with the same display time stamp are then searched for in the image buffer queues of the other windows in turn, and each one found is drawn on the canvas object according to its position information. Finally, the canvas object is sent to the display terminal through HDMI.
Referring to fig. 5, a schematic diagram of a decoding process of video stream data according to an embodiment of the present invention is shown.
In step 501, the decoder receives a windowing instruction.
The windowing instruction instructs the decoder to decode the received multi-path video stream data and transmit it to the display terminal. The windowing instruction may carry a window table of the multiple paths of video stream data. The window table contains the encoding information for each video stream.
Step 502, the decoder creates a global canvas object.
The decoder may create the canvas object according to the resolution of the original video stream. The resolution may also be carried in the windowing instruction.
In step 503, the decoder creates a respective image buffer queue for each path of video stream data.
For example, the decoder creates n image buffer queues for n paths of video stream data.
In step 504, the decoder performs synchronous decoding processing for each path of video stream data.
The decoder acquires the main sub-video image from the main image buffer queue at the time interval corresponding to the frame rate information of the original video stream, and records its display time stamp. Slave sub-video images with the same display time stamp as the main sub-video image are then acquired from the remaining image buffer queues. Each acquired sub-video image is drawn on the canvas object according to its position information.
In step 505, the decoder transmits the canvas object to the display terminal.
The display terminal displays the sub-video images on the canvas object, thereby achieving the effect of displaying the original video image.
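The setup portion of the flow above (steps 502 to 503, with the first window in the table taken as the main window) can be sketched as a small routine; the window-table fields and return values are hypothetical:

```python
from collections import deque

def handle_windowing_instruction(window_table, resolution):
    """Sketch: on a windowing instruction, create a global canvas and one
    image buffer queue per path of video stream listed in the window table."""
    width, height = resolution
    canvas = [[0] * width for _ in range(height)]              # step 502
    queues = {win["stream"]: deque() for win in window_table}  # step 503
    main_stream = window_table[0]["stream"]                    # first window is the main window
    return canvas, queues, main_stream

# Two paths of video stream data described by the window table
table = [{"stream": "v01"}, {"stream": "v02"}]
canvas, queues, main = handle_windowing_instruction(table, (4, 2))
```

The decode loop of step 504 would then fill each queue with decoded sub-video images and draw matching-timestamp groups onto the canvas before transmitting it.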
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 6, a block diagram of a video stream data processing apparatus according to an embodiment of the present invention is shown. The processing device for video stream data specifically comprises the following modules.
A coding information obtaining module 61, configured to obtain coding information of multiple paths of video stream data obtained by cutting an original video stream;
the video stream decoding module 62 is configured to perform decoding processing on each path of the video stream data according to the encoding information, and store each sub-video image of the same original video image belonging to the original video stream after the decoding processing into an image buffer queue corresponding to each path of the video stream data;
The video stream transmission module 63 is configured to extract and output each sub-video image belonging to the same original video image from each image buffer queue according to the encoding information.
In an exemplary embodiment of the present invention, the video stream transmission module 63 is configured to extract and output each sub-video image belonging to the same original video image from each of the image buffer queues according to the encoding information and the decoding-end output frame rate.
In an exemplary embodiment of the present invention, the video streaming module 63 includes:
the buffer queue selection module is used for selecting a main buffer queue and at least one slave buffer queue from the image buffer queues;
the sub-video image selection module is used for sequentially selecting main sub-video images from the main cache queue according to the time interval corresponding to the output frame rate of the decoding end, and synchronously selecting sub-video images corresponding to the main sub-video images from at least one sub-cache queue;
a sub-video image drawing module for drawing the master sub-video image and at least one of the slave sub-video images on a canvas object;
And the canvas image transmission module is used for transmitting the canvas object to the display terminal.
In an exemplary embodiment of the present invention, the sub video image selection module includes:
the main sub video image selection module is used for starting from a first sub video image in the main cache queue, and sequentially taking the sub video images in the main cache queue as the main sub video images according to the time interval;
and the slave sub-video image selection module is used for synchronously selecting the slave sub-video images with the same display time stamp as the master sub-video image from at least one slave cache queue.
In an exemplary embodiment of the present invention, the sub-video image drawing module is configured to draw the master sub-video image and at least one of the slave sub-video images on the canvas object according to the position information of the master sub-video image and the position information of the at least one of the slave sub-video images.
Referring to fig. 7, there is shown a block diagram of another processing apparatus for video stream data according to an embodiment of the present invention. The processing device for video stream data specifically comprises the following modules.
A video stream acquisition module 71, configured to acquire, from a video source, multiple paths of video stream data obtained by cutting an original video stream;
A video stream encoding module 72, configured to encode each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data, and generate encoding information of each path of the video stream data; the coding information is used for extracting and outputting each sub-video image belonging to the same original video image from an image buffer queue corresponding to each path of video stream data.
In an exemplary embodiment of the present invention, the video stream encoding module 72 includes:
the time stamp adding module is used for adding the same display time stamp to each sub-video image belonging to the same original video image of the original video stream in each path of video stream data;
and the coding information generation module is used for generating coding information of each path of video stream data according to the display time stamp.
In an exemplary embodiment of the present invention, the encoding information generating module includes:
a position information adding module, configured to add, for each of the sub-video images, position information in the original video image to which the sub-video image belongs;
and the coding information determining module is used for generating the coding information according to the display time stamp and the position information.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The above description of the method for processing video stream data and the device for processing video stream data provided by the present invention applies specific examples to illustrate the principles and embodiments of the present invention, and the above description of the examples is only used to help understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.
Claims (10)
1. A method for processing video stream data, the method comprising:
acquiring coding information of multi-path video stream data obtained by cutting an original video stream;
decoding each path of video stream data according to the coding information, and storing each sub-video image of the same original video image belonging to the original video stream after decoding into an image cache queue corresponding to each path of video stream data;
and respectively extracting and outputting each sub video image belonging to the same original video image from each image cache queue according to the coding information.
2. The method according to claim 1, wherein extracting and outputting each sub-video image belonging to the same original video image from each image buffer queue according to the encoding information, respectively, comprises:
and respectively extracting and outputting each sub-video image belonging to the same original video image from each image buffer queue according to the coding information and the output frame rate of the decoding end.
3. The method according to claim 1 or 2, wherein said extracting and outputting each sub-video image belonging to the same original video image from each image cache queue respectively comprises:
selecting a main cache queue and at least one slave cache queue from the image cache queues;
sequentially selecting main sub-video images from the main cache queue at the time interval corresponding to the output frame rate of the decoding end, and synchronously selecting, from the at least one slave cache queue, slave sub-video images corresponding to the main sub-video images;
drawing the main sub-video image and the at least one slave sub-video image on a canvas object;
and transmitting the canvas object to a display terminal.
4. The method according to claim 3, wherein said sequentially selecting main sub-video images from said main buffer queue at intervals corresponding to said decoding side output frame rate and synchronously selecting sub-video images corresponding to said main sub-video images from at least one of said sub-buffer queues comprises:
starting from the first sub-video image in the main cache queue, sequentially taking the sub-video images in the main cache queue as the main sub-video images at the time interval;
and synchronously selecting, from the at least one slave cache queue, the slave sub-video images having the same display time stamp as the main sub-video image.
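A minimal, non-authoritative sketch of the main/slave synchronization described above (illustrative only; `SubImage`, the stale-frame discard policy, and the region layout are assumptions, and a real implementation would pace each selection by the decoding-end output frame rate):

```python
from collections import deque, namedtuple

SubImage = namedtuple("SubImage", ["pts", "pixels"])

def select_synchronized(main_queue, slave_queues):
    """Take the next main sub-video image, then pick from every slave
    cache queue the sub-image with the same display timestamp."""
    main = main_queue.popleft()
    slaves = []
    for queue in slave_queues:
        while queue and queue[0].pts < main.pts:  # discard stale slave frames
            queue.popleft()
        if queue and queue[0].pts == main.pts:
            slaves.append(queue.popleft())
    return main, slaves

def draw_on_canvas(main, slaves, regions):
    """Place each sub-image at its cut region on one canvas object,
    reassembling the original video image for the display terminal."""
    return {region: sub.pixels for region, sub in zip(regions, [main] + slaves)}
```

The canvas is modeled as a plain dict from region to pixels; an actual decoder would blit into a graphics surface instead.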
5. A method for processing video stream data, the method comprising:
acquiring multi-path video stream data obtained by cutting an original video stream from a video source;
and encoding each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data, and generating coding information of each path of the video stream data, so that each sub-video image belonging to the same original video image can be extracted and output from the image cache queue corresponding to each path of the video stream data according to the coding information.
6. The method according to claim 5, wherein said encoding each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data to generate the coding information of each path of the video stream data comprises:
adding the same display time stamp to each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data;
and generating the coding information of each path of the video stream data according to the display time stamp.
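On the encoding side, the key point of claims 5 and 6 is that every sub-image cut from one original image carries the same display timestamp. A hypothetical sketch, where `encode_paths`, `cut`, and the dict layout are illustrative stand-ins rather than the patented implementation:

```python
def encode_paths(original_frames, cut):
    """original_frames: iterable of (pts, image); cut(image) returns the
    list of sub-images obtained by cutting the original video image.
    Every sub-image of one original image is stamped with the same pts;
    the per-path timestamp lists act as the coding information."""
    paths = None
    for pts, image in original_frames:
        subs = cut(image)
        if paths is None:
            paths = [[] for _ in subs]  # one path of video stream data per cut region
        for path, sub in zip(paths, subs):
            path.append({"pts": pts, "data": sub})
    coding_info = [[frame["pts"] for frame in path] for path in paths]
    return paths, coding_info
```

Because every path shares the timestamp sequence, a decoder can later match sub-images across paths by `pts` alone.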
7. A processing apparatus for video stream data, the apparatus comprising:
a coding information acquisition module, configured to acquire coding information of multi-path video stream data obtained by cutting an original video stream;
a video stream decoding module, configured to decode each path of the video stream data according to the coding information, and to store each decoded sub-video image belonging to the same original video image of the original video stream into an image cache queue corresponding to each path of the video stream data;
and a video stream transmission module, configured to extract and output, according to the coding information, each sub-video image belonging to the same original video image from each image cache queue respectively.
8. A processing apparatus for video stream data, the apparatus comprising:
a video stream acquisition module, configured to acquire, from a video source, multi-path video stream data obtained by cutting an original video stream;
and a video stream coding module, configured to encode each sub-video image belonging to the same original video image of the original video stream in each path of the video stream data, and to generate coding information of each path of the video stream data; the coding information is used for extracting and outputting each sub-video image belonging to the same original video image from the image cache queue corresponding to each path of the video stream data.
9. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the electronic device to perform the method for processing video stream data according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, causes the processor to perform the method for processing video stream data according to any one of claims 1 to 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310779623.2A CN116916090A (en) | 2023-06-28 | 2023-06-28 | Video stream data processing method and device, electronic equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116916090A true CN116916090A (en) | 2023-10-20 |
Family
ID=88352168
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310779623.2A Pending CN116916090A (en) | 2023-06-28 | 2023-06-28 | Video stream data processing method and device, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116916090A (en) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10638169B2 (en) | Codec techniques for fast switching without a synchronization frame | |
| US7027713B1 (en) | Method for efficient MPEG-2 transport stream frame re-sequencing | |
| KR100972792B1 (en) | Apparatus and method for synchronizing stereoscopic images and apparatus and method for providing stereoscopic images using the same | |
| US10582208B2 (en) | Video encoding apparatus, video decoding apparatus, video encoding method, and video decoding method | |
| CN112073810B (en) | Multi-layout cloud conference recording method and system and readable storage medium | |
| US9693095B2 (en) | Device and method for composing programmes from different sources in baseband | |
| CN109348309B (en) | Distributed video transcoding method suitable for frame rate up-conversion | |
| US9392210B2 (en) | Transcoding a video stream to facilitate accurate display | |
| CN107690074A (en) | Video coding and restoring method, audio/video player system and relevant device | |
| WO2022021519A1 (en) | Video decoding method, system and device and computer-readable storage medium | |
| CN115052170A (en) | Method and device for directing broadcast on cloud based on SEI time code information | |
| CN112653904B (en) | Rapid video clipping method based on PTS and DTS modification | |
| CN111757121A (en) | Video stream rewinding method and device | |
| CN112087642B (en) | Cloud guide playing method, cloud guide server and remote management terminal | |
| JP2004173118A (en) | Audio-video multiplexed data generation device, reproduction device, and video decoding device | |
| CN116916090A (en) | Video stream data processing method and device, electronic equipment and storage medium | |
| KR20040065170A (en) | Video information decoding apparatus and method | |
| CN115767130B (en) | Video data processing method, device, equipment and storage medium | |
| CN115665493B (en) | Large-screen splicing device, splicer, playback control method and system supporting recording and broadcasting | |
| CN114173207B (en) | Method and system for video frame sequential transmission | |
| CN118301374A (en) | Video data display method, system, electronic device and storage medium | |
| KR20250085599A (en) | Encoding and decoding of video including a plurality of toggleable overlays | |
| CN121486598A (en) | Method for providing multimedia data, related device and computer program product | |
| JP5066557B2 (en) | Video decoding method, video decoding apparatus, and video decoding program | |
| CN117135406A (en) | Method for seamlessly and continuously playing multiple videos |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||