CN112954433A - Video processing method and device, electronic equipment and storage medium - Google Patents
Video processing method and device, electronic equipment and storage medium Download PDFInfo
- Publication number
- CN112954433A CN112954433A CN202110137850.6A CN202110137850A CN112954433A CN 112954433 A CN112954433 A CN 112954433A CN 202110137850 A CN202110137850 A CN 202110137850A CN 112954433 A CN112954433 A CN 112954433A
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- data
- code stream
- data frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/437—Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44222—Analytics of user selections, e.g. selection of programs or purchase activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/643—Communication protocols
- H04N21/6437—Real-time Transport Protocol [RTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The application discloses a video processing method and apparatus, an electronic device, and a storage medium. Based on a video sending request, video data to be sent is encoded according to a video coding protocol to obtain video frames; video auxiliary data to be sent is packaged into a custom data frame; and a target video code stream is obtained from the video frames and the custom data frame and sent through a specified channel. The video data and the video auxiliary data are thereby sent through the same channel, which simplifies their transmission flow.
Description
Technical Field
The present disclosure relates to the field of video technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Video technology, i.e., moving-picture transmission technology, is referred to as video service in the telecommunications field and is often called multimedia communication or streaming media communication in the computer industry. Video communication technology is the main technology for realizing video services. With the rapid development of internet technology, applications of video technology such as live video, video conferencing, video courses, and video chat have become increasingly widespread.
Generally, a terminal encodes and transmits collected video data. In addition to the video data itself, video auxiliary data such as subtitles, audio, and data generated by various application scenarios often needs to be transmitted. The video auxiliary data must be displayed in synchronization with the video data, yet the two are conventionally processed and transmitted separately, which makes the process cumbersome.
Disclosure of Invention
In view of the foregoing problems, the present application provides a video processing method, an apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
based on the video sending request, carrying out coding processing on video data to be sent according to a video coding protocol to obtain a video frame;
packaging video auxiliary data to be sent into a custom data frame;
and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel.
In a second aspect, an embodiment of the present application further provides a video processing method, where the method includes:
based on the video playing request, obtaining a target video code stream through a specified channel;
obtaining a video frame and a custom data frame according to the target video code stream;
decoding the video frame according to a video coding protocol to obtain video data;
acquiring video auxiliary data encapsulated in the custom data frame;
and playing the video data and the video auxiliary data.
In a third aspect, an embodiment of the present application further provides a video processing apparatus, where the apparatus includes:
the video frame acquisition module is used for coding the video data to be sent according to a video coding protocol based on the video sending request to obtain a video frame;
the custom data frame acquisition module is used for packaging the video auxiliary data to be sent into a custom data frame;
and the target video code stream acquisition module is used for acquiring a target video code stream according to the video frame and the custom data frame and sending the target video code stream through a specified channel.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the methods described above.
In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the method described above.
According to the technical solutions provided by the embodiments of the present application, based on a video sending request, video data to be sent is encoded according to a video coding protocol to obtain video frames; video auxiliary data to be sent is packaged into a custom data frame; and a target video code stream is obtained from the video frames and the custom data frame and sent through a specified channel. The video data and the video auxiliary data are thereby sent through the same channel, which simplifies their transmission flow.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description depict only some embodiments of the present application, not all of them. All other embodiments and drawings obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
FIG. 1 is a schematic diagram illustrating an application environment according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a video processing method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating a video processing method according to another embodiment of the present application;
fig. 4 is a flow chart illustrating a video processing method according to another embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a structure of a target video stream in another embodiment of the present application;
FIG. 6 is a schematic flow chart of step S340 in another embodiment of the present application;
fig. 7 is a flow chart illustrating a video processing method according to still another embodiment of the present application;
FIG. 8 is a flow chart illustrating step S450 in still another embodiment of the present application;
fig. 9 is a schematic flow chart illustrating a video processing method according to a further embodiment of the present application;
fig. 10 is a block diagram illustrating a video processing apparatus according to an embodiment of the present application;
fig. 11 is a block diagram illustrating a video processing apparatus according to still another embodiment of the present application;
fig. 12 is a block diagram illustrating an electronic device according to an embodiment of the present application;
fig. 13 shows a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the rapid development of internet technology, applications of video technology have become increasingly widespread. Generally, a terminal collects video data, encodes it, and transmits the encoded video data through a specified channel. In the video transmission process, however, not only video data but also video auxiliary data, such as subtitles, audio, and data generated by various application scenarios, often needs to be transmitted, and this auxiliary data must be presented in synchronization with the video data. Because the video auxiliary data and the encoded video data are transmitted separately, the process is cumbersome, and synchronous display of the two can be disrupted by network delay, packet loss, limited decoder performance, and other factors.
To address the above problems, the inventors of the present application propose a video processing method, an apparatus, an electronic device, and a storage medium. The method comprises: based on a video sending request, encoding video data to be sent according to a video coding protocol to obtain video frames; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream from the video frames and the custom data frame and sending it through a specified channel, so that the video data and the video auxiliary data are sent through the same channel, simplifying their transmission flow.
The following describes an application environment of the video processing method provided by an embodiment of the present application.
Referring to fig. 1, fig. 1 illustrates a video processing system according to an embodiment of the present application, the video processing system including: a terminal 100 and a server 200.
The terminal 100 includes, but is not limited to, a laptop computer, a desktop computer, a tablet computer, a smart phone, a wearable electronic device, and the like. The terminal 100 may include an image capture device, such as a camera, that may capture raw video data. The terminal 100 may also include a sound collection device, such as a microphone, that may collect raw audio data. The terminal 100 may also include a display device, such as a display screen, that may be used to present video data. The terminal 100 may further comprise playing means for playing the video data and the video auxiliary data. The terminal 100 may also include a sound playing device, such as a speaker, which may be used to present audio data. The terminal 100 may further include a storage device that can store data of the terminal 100, such as video data, audio data, subtitle data, and the like.
The server 200 may be a stand-alone server or a cluster composed of a plurality of servers. The server 200 can perform functions such as information transmission, reception, and processing. For example, the server 200 may receive a target video code stream transmitted by a capturing terminal 100 and forward the received target video code stream to other playback terminals 100.
The terminal 100 and the server 200 can communicate with each other over a network. Optionally, the network uses standard communication techniques and/or protocols. The network is typically the internet, but can be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), or any combination of mobile, wireline, or wireless networks, private networks, or virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats such as Hypertext Markup Language (HTML) and Extensible Markup Language (XML). All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Networks (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, the techniques described above.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, an embodiment of the present application provides a video processing method applicable to a terminal 100, and the present embodiment describes a flow of steps at the terminal 100 side, where the method may include steps S110 to S130.
Step S110, based on the video sending request, the video data to be sent is coded according to a video coding protocol to obtain a video frame.
Video data is a continuous sequence of images. It may be acquired by the image capture device of the terminal 100.
In this embodiment, the terminal 100 may be installed with an application program capable of acquiring video data, such as a camera application program, an instant messaging application program, a live broadcast application program, and the like. The image capturing device of the terminal 100 may be triggered to capture video data by calling the related functions of the application program, for example, capturing images at regular time to obtain video data when the functions of recording video, chatting video, and live broadcasting are called.
The video sending request may include an identifier of the video data to be sent; the video data to be sent can be determined from this identifier, and the request instructs the terminal 100 to process that video data.
Alternatively, the video transmission request may be triggered by the user. The terminal 100 may obtain a video transmission request triggered by the user by monitoring an input operation of the user. The input operation of the user may be a contact type or a non-contact type. For example, the user may perform a contact-type input operation, such as clicking a button, an option, or the like associated with video transmission, for example, buttons or options for transmitting video, sharing video, chatting video, and playing live video. The terminal 100 can detect the click object of the user by monitoring the user operation, so as to obtain the video sending request corresponding to the object, and process the video data corresponding to the object. For another example, the user may also perform a non-contact input operation, such as a voice command, a gesture command, and the like, and the terminal 100 can detect the voice command, the gesture command, and the like input by the user through monitoring the user operation, so as to obtain a video transmission request corresponding to the voice command and the gesture command.
Alternatively, the video transmission request may be triggered by a specified event. An event triggering the video transmission request may be preset. For example, the specified event may be an event that the recording of the video is finished, and after the recording of the video is finished, the video sending request is triggered to process the video data to be sent. The designated event may also be a timed-transmission event, and the video transmission request is triggered at a timing after the video starts to be recorded so as to process the video data to be transmitted. The designated event can also be a quantitative sending event, and after the video starts to be recorded, the video sending request is triggered to process the video data to be sent when the video data to be sent reach a threshold value.
Video data is uncompressed, and its data format may be YUV, RGB, or the like. Because raw video data is very large, it must be encoded and compressed to remove redundancy in the spatial and temporal dimensions; compressed video data saves storage space and improves transmission efficiency. Video data is generally compressed according to a video coding protocol, such as the H.264 or H.265 coding protocol, and encoding the video data to be sent according to such a protocol yields video frames. Depending on the video coding protocol, video frames may be of various types; for example, the H.264 coding protocol defines I frames, P frames, and B frames. An I (Intra-coded picture) frame is an intra-coded image frame, i.e., a key frame: its data is completely retained and it can be decoded on its own, so it is also called an intra picture. A P (Predictive-coded picture) frame is a forward predictively coded image frame: it records the difference between the current frame and a previous key frame (or P frame), and during decoding this difference is superimposed on the previously buffered picture to generate the final picture. A P frame can thus be understood as a difference frame; it carries no complete picture data, only the differences from the previous frame's picture. A B (Bidirectionally predicted picture) frame is a bidirectionally predictively coded image frame: it records the differences between the current frame and both the preceding and following frames. To decode a B frame, both the previously buffered picture and the subsequently decoded picture must be obtained, and the final picture is produced by combining the preceding and following pictures with the data of the current frame. It should be understood that video data may be encoded into different types of video frames according to different video coding protocols, and the present application is not limited in this respect.
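For illustration only (this helper is not part of the claimed method), the frame types above can be recognized in an H.264 Annex B byte stream from the 5-bit nal_unit_type in each NAL unit header; distinguishing P from B frames would additionally require parsing the slice header, which this sketch omits.

```python
# Hypothetical sketch: split an H.264 Annex B byte stream on its
# 00 00 01 start codes and classify each NAL unit by nal_unit_type
# (the low 5 bits of the first header byte).
NAL_TYPES = {
    1: "non-IDR slice (P/B)",  # slice type needs further slice-header parsing
    5: "IDR slice (I frame)",
    6: "SEI",
    7: "SPS",
    8: "PPS",
}

def split_nal_units(stream: bytes):
    """Split an Annex B stream on 00 00 01 start codes (handles the
    4-byte 00 00 00 01 form by trimming the trailing zero)."""
    units, i = [], 0
    while True:
        j = stream.find(b"\x00\x00\x01", i)
        if j < 0:
            break
        k = stream.find(b"\x00\x00\x01", j + 3)
        if k < 0:
            end = len(stream)
        else:
            # A 4-byte start code leaves a 0x00 just before the next match.
            end = k - 1 if stream[k - 1] == 0 else k
        units.append(stream[j + 3:end])
        i = j + 3
    return units

def nal_type(nal: bytes) -> str:
    """Classify one NAL unit by its nal_unit_type field."""
    return NAL_TYPES.get(nal[0] & 0x1F, "other")
```

For example, a stream containing an SPS followed by an IDR slice would yield `"SPS"` and `"IDR slice (I frame)"` for its two units.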
And step S120, packaging the video auxiliary data to be sent into a custom data frame.
The video auxiliary data includes audio data, subtitle data, index data, and the like. The video auxiliary data may be data collected by the terminal 100 at the same time when the video data is acquired, for example, when the image collecting device of the terminal 100 collects an image to obtain the video data, the sound collecting device of the terminal 100 may collect a sound to obtain the audio data. The video auxiliary data may also be data added by the user for presentation with the video data as it is played, for example, the user adding subtitle data at a relevant location of the video data.
Typically, the video auxiliary data and the video data are processed and transmitted separately. On one hand, processing each separately is cumbersome and occupies CPU resources. On the other hand, network delay during transmission may cause out-of-order arrival or packet loss: the video data may already have been received and played while the video auxiliary data has not yet arrived, which increases the difficulty of displaying the two synchronously.
In order to simplify the flow of video data and video auxiliary data transmission, the embodiments of the present application solve this problem by constructing custom data frames.
In the embodiment of the present application, a custom data frame is constructed for transmitting the video auxiliary data; it may be implemented, for example, using an SEI (Supplemental Enhancement Information) data frame in the H.264 protocol.
In the embodiment of the application, the custom data frame carries the video auxiliary data to be sent. Custom data frames are inserted between video frames and transmitted together with them, which simplifies the data processing flow; transmitting the two together also reduces the influence of network delay.
The type of the custom data frame can be set so as to be distinguished from video frames, and during decoding the video frames and custom data frames can be told apart by frame type.
In some embodiments, the video auxiliary data may be compressed before being encapsulated into custom data frames. For example, audio data may be encoded into audio frames according to an audio coding protocol such as AAC, and the audio frames then encapsulated into the custom data frame. This greatly saves space and allows the custom data frame to carry more data.
In other embodiments, the video auxiliary data may also be encapsulated directly into custom data frames without compression processing. For example, the subtitle data may be directly encapsulated into custom data frames.
In still other embodiments, the same custom data frame may encapsulate one or more types of video auxiliary data at the same time.
Optionally, a designated field may be added to the header of the custom data frame to distinguish the type of data it encapsulates. For example, audio data may be represented by "01" and subtitle data by "11". The data type of the encapsulated video auxiliary data can then be quickly identified from this field, and the data dispatched to the corresponding module for processing.
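As a minimal sketch of the header field described above, the following assumes a hypothetical frame layout (not specified by the patent): a 1-byte type field using the values 0b01 for audio and 0b11 for subtitles, followed by a 4-byte big-endian payload length and the payload itself.

```python
import struct

# Hypothetical type codes matching the "01"/"11" example in the text.
TYPE_AUDIO, TYPE_SUBTITLE = 0b01, 0b11

def pack_custom_frame(frame_type: int, payload: bytes) -> bytes:
    """Encapsulate video auxiliary data into a custom data frame:
    1-byte type field + 4-byte big-endian length + payload."""
    return struct.pack(">BI", frame_type, len(payload)) + payload

def unpack_custom_frame(frame: bytes):
    """Read the type field and payload back out, so the receiver can
    dispatch the data to the corresponding processing module."""
    frame_type, length = struct.unpack(">BI", frame[:5])
    return frame_type, frame[5:5 + length]
```

On the receiving side, the type field alone is enough to route the payload, e.g. `TYPE_SUBTITLE` frames to a subtitle renderer and `TYPE_AUDIO` frames to an audio decoder.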
Step S130, obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel.
In general, the transmission of video data and video auxiliary data may be based on a network protocol for data communication, such as RTMP (Real Time Messaging Protocol). Under RTMP, the terminal 100 and the server 200 establish a network connection (NetConnection), which represents the underlying connection between them. A plurality of network streams may then be established between the terminal 100 and the server 200, each network stream representing a channel for multimedia data, such as video data or video auxiliary data.
In the prior art, video data and video auxiliary data are processed and transmitted separately. For example, video data is processed to obtain a video code stream, and audio data is processed to form an audio code stream; the two code streams are then transmitted over the network through their respective assigned channels, for example over separately established network streams.

In this embodiment, the target video code stream is obtained according to the video frames and the custom data frames. It will be appreciated that the target video code stream may be composed of a plurality of video frames and one or more custom data frames. It differs from an existing video code stream in that custom data frames encapsulating video auxiliary data are inserted between the video frames: the standard of the original video code stream is not broken (that is, the video frames can still be identified and played), yet video auxiliary data is added. The target video code stream can be transmitted through a specified channel, such as an established network stream, so that the video data and the video auxiliary data are transmitted through the same specified channel, reducing the transmission flow of both.
In the video processing method provided by an embodiment of the present application, video data to be transmitted is encoded according to a video encoding protocol based on a video transmission request to obtain a video frame; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel, so that the video data and the video auxiliary data are sent through the same channel, and the transmission flow of the video data and the video auxiliary data is simplified.
Referring to fig. 3, another embodiment of the present application provides a video processing method, which can be applied to a video processing terminal 100, and this embodiment describes a flow of steps at the terminal 100 side, where the method can include steps S210 to S250.
Step S210, based on the video transmission request, performing encoding processing on the video data to be transmitted according to a video encoding protocol to obtain a video frame.
Step S220, packaging the video auxiliary data to be transmitted into a custom data frame.
For the detailed description of steps S210 to S220, refer to steps S110 to S120, which are not described herein again.
And step S230, obtaining an initial video code stream according to all the obtained video frames.
Video data to be transmitted is processed according to a video coding protocol to obtain a plurality of video frames, which are arranged in time order to obtain the initial video code stream. Optionally, the video data may be time-stamped according to its acquisition time, and the video frames arranged in the order of their timestamps to obtain the initial video code stream.
Step S240, inserting a custom data frame into the initial video code stream to obtain a target video code stream.
The custom data frames are inserted between the video frames of the initial video code stream to obtain the target video code stream.
The custom data frame may be inserted between any two video frames; the custom data frames and video frames together form the target video code stream, so the video data and the video auxiliary data can be transmitted through the same channel. When the target video code stream is received, the custom data frames and video frames need only be distinguished by frame type: the video auxiliary data encapsulated in the custom data frames and the video data encapsulated in the video frames are processed separately, and synchronous display is achieved by synchronizing the two. The video auxiliary data carries a timestamp, which may be set according to the acquisition time; for example, audio data may be time-stamped as it is acquired. Optionally, the timestamp of the video auxiliary data may also be set by the user: when the user adds a subtitle to specific video data, the subtitle data is given the same timestamp as that video data. Video data and video auxiliary data can then be synchronized according to their corresponding timestamps.
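Timestamp-based synchronization of the two kinds of data, as described above, could be sketched as follows; the dict representation of frames with a "ts" key and the tolerance value are assumptions for illustration:

```python
def synchronize(video_frames, aux_frames, tolerance=0.02):
    """Pair each video auxiliary item with the video frame whose
    timestamp is closest, within a tolerance in seconds."""
    pairs = []
    for aux in aux_frames:
        # find the video frame nearest in time to this auxiliary item
        nearest = min(video_frames, key=lambda v: abs(v["ts"] - aux["ts"]))
        if abs(nearest["ts"] - aux["ts"]) <= tolerance:
            pairs.append((nearest["ts"], aux["ts"]))
    return pairs
```

As the later embodiments point out, this style of synchronization has to be performed continuously at playback time, which is exactly the burden the timestamp-ordered insertion below avoids.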
And step S250, transmitting the target video code stream through a specified channel.
In a video processing method provided by another embodiment of the present application, video data to be transmitted is encoded according to a video encoding protocol based on a video transmission request to obtain a video frame; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel, so that the video data and the video auxiliary data are sent through the same channel, the transmission flow of the video data and the video auxiliary data is simplified, and the difficulty of synchronous display is reduced.
Referring to fig. 4, another embodiment of the present application provides a video processing method, which can be applied to the video processing terminal 100, and this embodiment describes a flow of steps at the video processing terminal 100 side, and the method can include steps S310 to S350.
Step S310, based on the video sending request, the video data to be sent is coded according to a video coding protocol to obtain a video frame.
Step S320, packaging the video auxiliary data to be transmitted into a custom data frame.
And step S330, obtaining an initial video code stream according to all the obtained video frames.
For the detailed description of steps S310 to S330, refer to steps S210 to S230, which are not described herein again.
And step S340, inserting the custom data frame into the initial video code stream according to the timestamp of the custom data frame to obtain a target video code stream.
The target video code stream described above can be transmitted through the same specified channel, which simplifies the data transmission flow and reduces the influence of network delay. When the target video code stream is received, the video frames are decoded to obtain video data, and the data encapsulated in the custom data frames is extracted to obtain video auxiliary data. During playback, however, the video data and the video auxiliary data are synchronized by timestamp, which requires the playing device to implement a synchronization function that must be developed separately. Moreover, when synchronization relies on timestamps, timestamp errors accumulate during playback, so the video data and video auxiliary data gradually drift out of alignment and synchronization becomes difficult.
To solve the problem of synchronizing video data and video auxiliary data during playback, the embodiment of the application further inserts the custom data frames into the initial video code stream according to their timestamps to obtain the target video code stream.
Fig. 5 is a schematic diagram illustrating the structure of a target video code stream in another embodiment of the present application. In fig. 5, the target video code stream 110 includes video frames 111 and custom data frames 112. The video frames 111 include a first video frame 111a, a second video frame 111b, a third video frame 111c, a fourth video frame 111d, and a fifth video frame 111e, arranged in timestamp order: first video frame 111a - second video frame 111b - third video frame 111c - fourth video frame 111d - fifth video frame 111e. The custom data frames 112 include a first custom data frame 112a and a second custom data frame 112b. In this embodiment, the timestamp of the first custom data frame 112a falls after the second video frame 111b and before the third video frame 111c, so the first custom data frame 112a is inserted between them; the timestamp of the second custom data frame 112b falls after the fourth video frame 111d and before the fifth video frame 111e, so the second custom data frame 112b is inserted between them. Inserting the custom data frames 112 among the video frames 111 according to their timestamps yields the target video code stream 110. It is understood that the application is not limited thereto: the target video code stream 110 may further include other necessary information, for example a frame header indicating its start position and a frame end indicating its end position, which is not limited herein.
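The insertion illustrated by Fig. 5 amounts to a merge by timestamp. A minimal sketch, assuming frames are represented as dicts with "ts" and "kind" keys (an illustrative representation, not part of the application):

```python
def insert_by_timestamp(video_frames, custom_frames):
    """Merge custom data frames into the video frame sequence by
    timestamp, mirroring the layout of Fig. 5."""
    stream = sorted(video_frames, key=lambda f: f["ts"])
    for cf in sorted(custom_frames, key=lambda f: f["ts"]):
        # insert the custom frame just before the first frame
        # whose timestamp is later than the custom frame's
        pos = next((i for i, f in enumerate(stream) if f["ts"] > cf["ts"]),
                   len(stream))
        stream.insert(pos, cf)
    return stream
```

With five video frames at seconds 1 to 5 and custom frames at 2.5 and 4.5, the result is exactly the Fig. 5 ordering: the custom frames land between the second and third, and between the fourth and fifth, video frames.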
In some embodiments, the video frames in the initial video code stream are arranged in timestamp order, and the custom data frames are inserted into the initial video code stream according to their timestamps to obtain the target video code stream. The video frames and custom data frames in the target video code stream are thus arranged in time order. When the target video code stream is processed, it can be processed frame by frame in that order: the first frame is a video frame, so it is obtained, decoded, and the resulting video data played; the second frame is a custom data frame, so it is obtained, processed, and the resulting video auxiliary data played. In this way the video data and video auxiliary data are processed in sequence without additional synchronization; the play order is tied to the order in which the video frames and custom data frames are arranged, rather than each being played independently according to its own timestamps. Even if a timestamp carries an error, it is corrected implicitly when the frames are processed in order.

Consider an example: the timestamp of the first video frame is the 1st second, the timestamps of the first and second custom data frames are the 1st and 2nd seconds, and the timestamp of the second video frame is the 3rd second. Suppose first that the video frames and custom data frames are processed separately into video data and video auxiliary data, which are then displayed synchronously by timestamp. The first video data is obtained from the first video frame and the second video data from the second video frame; the first video data is played at the 1st second and the second video data at the 3rd second. Likewise, the first video auxiliary data is obtained from the first custom data frame and the second video auxiliary data from the second custom data frame; the first video auxiliary data is played at the 1st second and the second video auxiliary data at the 2nd second.
The theoretical playing sequence is:

Video data: first video data (1st second) - second video data (3rd second).

Video auxiliary data: first video auxiliary data (1st second) - second video auxiliary data (2nd second).
However, in practice, if video data and video auxiliary data are played separately, a fine gap exists between successive frames because the playing time of each frame is difficult to control precisely; as these gaps accumulate, the loss of synchronization becomes more and more obvious. Moreover, the video data is non-linear while the video auxiliary data may contain linear data, which also causes deviation. Gradually an out-of-sync situation arises, with either the video data or the video auxiliary data playing ahead.
The actual playing sequence is:

Video data: first video data (1st second) - second video data (3rd second).

Video auxiliary data: first video auxiliary data (1st second) - second video auxiliary data (theoretically the 2nd second; in practice, because of drift, it may end up being played at the 3rd second, at the same time as the second video data, resulting in playing dislocation).
Therefore, the prior art has to develop a synchronization mechanism that continuously corrects this deviation during playback: when the video data plays slowly, its playing speed is increased, and when it plays fast, its playing speed is decreased. This increases the difficulty of synchronization.
In the above example, by contrast, when the frames are processed sequentially in their arrangement order, the processing of the second video data starts only after the processing of the second video auxiliary data has finished, so the dislocation of playing the second video auxiliary data and the second video data at the same time cannot occur, which greatly reduces the synchronization difficulty.
Specifically, referring to fig. 6, fig. 6 shows a schematic flow chart of step S340 in another embodiment of the present application, and in the embodiment of the present application, step S340 may include:
step S341, determining whether a synchronous video frame of the custom data frame exists in the initial video code stream, where the synchronous video frame is a video frame having the same timestamp as the custom data frame in the initial video code stream.
A synchronous video frame has the same timestamp as the custom data frame. The two can be arranged adjacently, so the error between their playing times is small.
Step S342, if such a synchronous video frame exists, arranging the custom data frame and the synchronous video frame adjacently in the initial video code stream.
Optionally, the custom data frame may be arranged before the synchronous video frame, so that the custom data frame is processed first and the synchronous video frame afterwards; the video auxiliary data is then played before the video data.

Optionally, the custom data frame may be arranged after the synchronous video frame, so that the synchronous video frame is processed first and the custom data frame afterwards; the video data is then played before the video auxiliary data.
Optionally, in order to control the playing time precisely, the video frame and the custom data frame may be set adjacent to each other, and a playing order then set to control playback.
Because a video frame must be decoded before playable video data is obtained, and decoding is relatively complex and time-consuming, playing efficiency can be further improved as follows: if a synchronous video frame of the custom data frame exists in the initial video code stream, the custom data frame is arranged adjacent to and after the synchronous video frame, so that the synchronous video frame is decoded first and playing efficiency improves.
Step S343, if no synchronous video frame exists, inserting the custom data frame into the initial video code stream in the order of the timestamps.

That is, if no video frame has the same timestamp as the custom data frame, the custom data frames are inserted in sequence according to the order of their timestamps.
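Steps S341 to S343 might be sketched as follows, placing the custom data frame directly after a synchronous video frame when one exists (the variant chosen above to let the slower video decode start first) and otherwise falling back to timestamp order; the dict frame representation is an assumption for illustration:

```python
def insert_custom_frame(stream, cf):
    """Insert one custom data frame into the initial code stream.
    If a video frame with an identical timestamp exists (a synchronous
    video frame), place the custom frame directly after it; otherwise
    insert the custom frame in timestamp order."""
    for i, f in enumerate(stream):
        if f["kind"] == "video" and f["ts"] == cf["ts"]:
            stream.insert(i + 1, cf)   # adjacent, after the sync frame
            return stream
    # no synchronous frame: insert before the first later frame
    pos = next((i for i, f in enumerate(stream) if f["ts"] > cf["ts"]),
               len(stream))
    stream.insert(pos, cf)
    return stream
```

The receiver then simply processes the stream front to back; the synchronous video frame is always decoded before its custom data frame is decapsulated.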
And step S350, sending the target video code stream through a specified channel.
In another embodiment of the present application, a video processing method obtains a video frame by encoding video data to be transmitted according to a video encoding protocol based on a video transmission request; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel, so that video data and video auxiliary data are sent through the same channel, the transmission flow of the video data and the video auxiliary data is simplified, and the difficulty of synchronous display is reduced by setting the sequence of the video frame and the video auxiliary frame.
Referring to fig. 7, a video processing method applicable to the video processing terminal 100 is provided in yet another embodiment of the present application, which describes a flow of steps at the video processing terminal 100 side, and the method may include steps S410 to S420.
And step S410, based on the video sending request, carrying out coding processing on the video data to be sent according to a video coding protocol to obtain a video frame.
Step S420, packaging the video auxiliary data to be transmitted into a custom data frame.
And step S430, obtaining an initial video code stream according to all the obtained video frames.
Step S440, inserting the custom data frame into the initial video code stream according to the timestamp of the custom data frame to obtain a target video code stream.
For detailed description of steps S410 to S440, refer to steps S310 to S340, which are not described herein again.
And S450, setting the interframe sequence of the custom data frame according to the timestamp of the custom data frame, wherein the interframe sequence represents the playing sequence of the custom data frame and the video frame adjacent to the custom data frame.
The inter-frame order can be set in a preset field of the custom data frame. For example, in the H.264 coding protocol, each frame carries a NALU header that occupies one byte and is divided into three fields: forbidden_zero_bit, nal_ref_idc and nal_unit_type, occupying 1, 2 and 5 bits respectively. Here nal_ref_idc may be used to indicate the relationship between the current frame and an adjacent video frame, such as playing at the same time, playing the current frame first, or playing the adjacent frame first.
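The one-byte NALU header layout can be packed and unpacked with simple bit operations. Note that repurposing nal_ref_idc to carry an inter-frame play order, as described here, is this application's custom semantics rather than standard H.264 usage:

```python
def pack_nalu_header(forbidden_zero_bit: int, nal_ref_idc: int,
                     nal_unit_type: int) -> int:
    """Pack the three NALU header fields (1, 2 and 5 bits) into one byte."""
    assert forbidden_zero_bit in (0, 1)
    assert 0 <= nal_ref_idc <= 3       # 2-bit field
    assert 0 <= nal_unit_type <= 31    # 5-bit field
    return (forbidden_zero_bit << 7) | (nal_ref_idc << 5) | nal_unit_type

def unpack_nalu_header(byte: int):
    """Split a NALU header byte back into its three fields."""
    return (byte >> 7) & 1, (byte >> 5) & 0b11, byte & 0b11111
```

For instance, a standard SEI unit has nal_unit_type 6 and header byte 0x06; a value such as 0x65 unpacks to nal_ref_idc 3 and nal_unit_type 5.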
Specifically, referring to fig. 8, fig. 8 shows a schematic flowchart of step S450 in another embodiment of the present application, and in the embodiment of the present application, step S450 may include:
step S451, if the timestamp of the custom data frame is the same as the timestamp of the adjacent video frame, setting the inter-frame sequence of the custom data frame as a first sequence, where the first sequence represents that the custom data frame and the adjacent video frame with the same timestamp are played at the same time.
If the timestamps are the same, the custom data frame and the video frame with the same timestamp are played at the same time, so that accurate playing is ensured.
And step S452, if the time stamp of the custom data frame is different from the time stamp of the adjacent video frame, setting the inter-frame sequence of the custom data frame as a second sequence, wherein the second sequence represents that the custom data frame and the adjacent video frame are sequentially played according to the arrangement sequence in the target video code stream.
If the timestamps are different, the custom data frame is played in sequence with the adjacent video frame.
And step S460, sending the target video code stream through a specified channel.
In a video processing method provided in another embodiment of the present application, a video frame is obtained by encoding video data to be transmitted according to a video encoding protocol based on a video transmission request; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel, so that video data and video auxiliary data are sent through the same channel, the transmission flow of the video data and the video auxiliary data is simplified, and accurate playing is ensured by setting the sequence between frames.
Referring to fig. 9, a video processing method applicable to the video display terminal 100 according to a further embodiment of the present application is described in the present embodiment, where the method includes steps S510 to S550.
Step S510, based on the video playing request, a target video code stream is obtained through a specified channel.
In this embodiment, the terminal 100 may be configured to have an application program capable of acquiring the target video code stream, such as a video playing application program, an instant messaging application program, a live broadcast application program, and the like. The terminal 100 may be triggered to acquire the target video code stream by calling the related functions of the application program.
The video playing request may include an identifier of video data to be acquired, a target video code stream corresponding to the data to be acquired may be determined according to the identifier, and the video playing request is used to instruct the terminal 100 to acquire the target video code stream through a specified channel.
Optionally, the video play request may be triggered by the user. The terminal 100 may obtain a video playing request triggered by the user by monitoring the user's input operation, which may be contact or non-contact. For example, the user may issue a contact input by clicking a button or option associated with video playing, such as one for playing a video, starting a video chat, or watching a live stream; by monitoring the operation, the terminal 100 detects the object the user clicked and obtains the video playing request corresponding to that object, so as to acquire the corresponding target video code stream. As another example, the user may issue a non-contact input such as a voice command or a gesture command, which the terminal 100 likewise detects by monitoring the user operation, so as to obtain the video playing request corresponding to that command.
Alternatively, the video play request may be triggered by a specified event. An event triggering the video play request may be preset. For example, the specified event may be an event that the last video playing is finished, for example, when watching a television play, after the video playing of one episode is finished, the next episode is automatically played.
And step S520, obtaining a video frame and a custom data frame according to the target video code stream.
The target video code stream is composed of video frames and self-defined data frames, and the video frames and the self-defined data frames can be obtained according to the target video code stream.
Step S530, decoding the video frame according to the video coding protocol to obtain video data.
Video frames are compressed data, whereas playing video requires uncompressed video data, so the video frames are decoded according to the video coding protocol to obtain the video data. The video coding protocol may be, for example, the H.264 or H.265 coding protocol.
And S540, acquiring the video auxiliary data encapsulated in the custom data frame.
The custom data frame is encapsulated with video auxiliary data, and the video auxiliary data can be obtained through decapsulation. The video auxiliary data may be compressed data or uncompressed data. When the acquired video auxiliary data is compressed data, the compressed video auxiliary data needs to be decompressed further. For example, if the acquired video auxiliary data is an audio frame, the audio frame needs to be decoded to obtain audio data.
It can be understood that the video frames and the custom data frames can be sequentially processed according to the arrangement sequence of the video frames and the custom data frames in the target video code stream.
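Receiver-side processing of the target code stream in arrival order might look like the following sketch; the decode and decapsulate handlers are placeholders, and the dict frame representation is an assumption:

```python
def process_stream(target_stream, decode, decapsulate):
    """Walk the target code stream front to back, dispatching each
    frame by its type: video frames go to the video decoder, custom
    data frames are decapsulated into video auxiliary data."""
    results = []
    for frame in target_stream:
        if frame["kind"] == "video":
            results.append(decode(frame))       # e.g. H.264/H.265 decode
        else:
            results.append(decapsulate(frame))  # extract auxiliary data
    return results
```

Because the frames were inserted in play order at the sender, walking the stream in order reproduces that play order without a separate synchronization pass.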
And step S550, playing the video data and the video auxiliary data.
In some embodiments, the video data in the video frame and the video auxiliary data in the custom data frame may be acquired separately, and then the video data and the video auxiliary data are synchronized according to the timestamp and then played.
In other embodiments, the video data and the video auxiliary data may be played sequentially in the order of the video frames and the custom data frames.
Optionally, if the video frame precedes the custom data frame, the video data is played first, and then the video auxiliary data is played.
Optionally, if the video frame follows the custom data frame, the video auxiliary data is played first, and then the video data is played.
In still other embodiments, if the custom data frame sets the inter-frame order, then playback is performed in the set inter-frame order.
Alternatively, if the interframe order of the custom data frames is the first order, the video auxiliary data is played in synchronization with the video data having the same timestamp.
Optionally, if the inter-frame order of the custom data frame is the second order, the video auxiliary data and the adjacent video data are played sequentially in their arrangement order.
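The playback decision for a custom data frame based on its inter-frame order could be sketched as follows; the numeric order constants and the "order" field name are assumptions for illustration:

```python
FIRST_ORDER = 1   # play together with the adjacent same-timestamp video frame
SECOND_ORDER = 2  # play sequentially in the arrangement order

def play_mode(custom_frame: dict) -> str:
    """Map a custom data frame's inter-frame order to a playback mode."""
    if custom_frame.get("order") == FIRST_ORDER:
        return "simultaneous"  # first order: play with the adjacent video frame
    return "sequential"        # second order (or unset): play in stream order
```

Treating an unset field as sequential keeps playback well-defined even for streams whose sender never set an inter-frame order.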
In a video processing method provided by another embodiment of the present application, a video frame is obtained by encoding video data to be transmitted according to a video encoding protocol based on a video transmission request; packaging video auxiliary data to be sent into a custom data frame; and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel, so that the video data and the video auxiliary data are sent through the same channel, and the transmission flow of the video data and the video auxiliary data is simplified.
Referring to fig. 10, a video processing apparatus 300 according to an embodiment of the present invention is shown, where the video processing apparatus 300 includes: a video frame acquisition module 310, a custom data frame acquisition module 320, and a target video stream acquisition module 330.
A video frame obtaining module 310, configured to, based on the video sending request, perform coding processing on video data to be sent according to a video coding protocol to obtain a video frame;
a custom data frame obtaining module 320, configured to package video auxiliary data to be sent into a custom data frame;
and the target video code stream obtaining module 330 is configured to obtain a target video code stream according to the video frame and the custom data frame, and send the target video code stream through a specified channel.
Further, the video processing apparatus 300 further includes:
and the initial video code stream acquisition module is used for acquiring an initial video code stream according to all the acquired video frames.
And the user-defined data frame inserting module is used for inserting the user-defined data frame into the initial video code stream to obtain the target video code stream.
And the target video code stream sending module is used for sending the target video code stream through the specified channel.
Further, the video processing apparatus 300 further includes a timestamp custom data insertion module.
And the timestamp custom data insertion module is used for inserting custom data frames into the initial video code stream according to the timestamps of the custom data frames so as to obtain the target video code stream.
Further, the timestamp custom data insertion module comprises a synchronous video frame judgment unit, a first setting unit and a second setting unit.
And the synchronous video frame judging unit is used for judging whether a synchronous video frame of the user-defined data frame exists in the initial video code stream, wherein the synchronous video frame is a video frame which has the same timestamp as the user-defined data frame in the initial video code stream.
And the first setting unit is used for adjacently setting the self-defined data frame and the synchronous video frame in the initial video code stream if the self-defined data frame and the synchronous video frame exist.
The first setting unit is further configured to, if the synchronous video frame of the custom data frame exists in the initial video code stream, set the custom data frame and the synchronous video frame adjacently in the initial video code stream, and set the custom data frame behind the synchronous video frame.
And the second setting unit is used for inserting the custom data frames into the initial video code stream according to the sequence of the timestamps if the custom data frames do not exist.
Further, the video processing apparatus 300 further includes an inter-frame order setting module.
And the interframe sequence setting module is used for setting the interframe sequence of the custom data frames according to the time stamps of the custom data frames, wherein the interframe sequence represents the playing sequence of the custom data frames and the video frames adjacent to the custom data frames.
Further, the inter-frame order setting module includes a first order unit and a second order unit.
And the first sequence unit is used for setting the inter-frame sequence of the custom data frames to be a first sequence if the time stamps of the custom data frames are the same as the time stamps of the adjacent video frames, and the first sequence represents that the custom data frames and the adjacent video frames with the same time stamps are played simultaneously.
And the second sequence unit is used for setting the inter-frame sequence of the custom data frames to be a second sequence if the time stamps of the custom data frames are different from the time stamps of the adjacent video frames, and the second sequence represents that the custom data frames and the adjacent video frames are sequentially played according to the arrangement sequence in the target video code stream.
Referring to fig. 11, a video processing apparatus 400 according to another embodiment of the present invention is shown. The video processing apparatus 400 includes: a target video code stream obtaining module 410, a video frame and custom data frame obtaining module 420, a video data obtaining module 430, a video auxiliary data obtaining module 440 and a playing module 450.
The target video code stream obtaining module 410 is configured to obtain the target video code stream through the specified channel based on a video playing request.
The video frame and custom data frame obtaining module 420 is configured to obtain video frames and custom data frames from the target video code stream.
The video data obtaining module 430 is configured to decode the video frames according to a video coding protocol to obtain video data.
The video auxiliary data obtaining module 440 is configured to obtain the video auxiliary data encapsulated in the custom data frames.
The playing module 450 is configured to play the video data and the video auxiliary data.
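The playback-side modules 420 to 450 might be sketched as follows; the `frames` record shape and the `decode_video`/`render` callables are assumptions for illustration, not the patent's interfaces:

```python
def play_target_stream(frames, decode_video, render):
    """Sketch of modules 420-450: split the target code stream into video
    frames and custom data frames, decode the former, unwrap the latter,
    and hand both to the player.
    """
    for frame in frames:
        if frame["kind"] == "video":
            # Module 430: decode according to the video coding protocol.
            video_data = decode_video(frame["payload"])
            render(video=video_data, aux=None)
        else:
            # Module 440: the auxiliary data travels verbatim inside the
            # custom data frame, so no video decoder is involved.
            render(video=None, aux=frame["payload"])
```

The point of the split is that custom data frames bypass the video decoder entirely and reach the player as-is.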
Referring to fig. 12, based on the foregoing video processing method, another embodiment of the present application provides an electronic device 500 capable of executing the video processing method. The electronic device 500 includes one or more processors 510, a memory 520, and one or more application programs. The memory 520 stores program code that can execute the content of the foregoing embodiments, and the processors 510 can execute the programs stored in the memory 520.
The memory 520 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 520 may be used to store instructions, programs, code sets, or instruction sets. The memory 520 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a video playing request or a video sending request), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the terminal 100 in use (such as video data and video auxiliary data), and the like.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 600 has stored therein a program code 610, the program code 610 being capable of being invoked by a processor to perform the method described in the above method embodiments.
The computer-readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 600 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 600 has storage space for the program code 610 that performs any of the method steps described above. The program code 610 may be contained in one or more computer program products and may be compressed, for example, in a suitable form.
With the video processing method and apparatus, electronic device, and storage medium provided by the present invention, video data to be sent is encoded according to a video coding protocol based on a video sending request to obtain video frames; video auxiliary data to be sent is encapsulated into custom data frames; and a target video code stream is obtained from the video frames and the custom data frames and sent through a specified channel. The video data and the video auxiliary data are thus sent through the same channel, which simplifies their transmission flow.
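One way the single-channel muxing could be realized is a simple tagged, length-prefixed framing; the frame-type tags and wire format below are assumptions made for this sketch (the patent does not fix a layout, and in an H.264 deployment the custom data frame might instead ride in an SEI NAL unit):

```python
import struct

# Frame-type tags invented for this sketch.
VIDEO_FRAME, CUSTOM_FRAME = 0x01, 0x02

def pack_frame(kind, payload):
    """One frame on the channel: 1-byte type + 4-byte big-endian length + body."""
    return struct.pack(">BI", kind, len(payload)) + payload

def build_target_stream(frames):
    """Sender side: `frames` is an iterable of (kind, payload) pairs already
    arranged in the order produced by the insertion step."""
    return b"".join(pack_frame(kind, payload) for kind, payload in frames)

def parse_target_stream(data):
    """Receiver side: recover the (kind, payload) pairs from the stream."""
    frames, offset = [], 0
    while offset < len(data):
        kind, length = struct.unpack_from(">BI", data, offset)
        offset += 5
        frames.append((kind, data[offset:offset + length]))
        offset += length
    return frames
```

Because both frame kinds share one byte stream, video data and auxiliary data (e.g. GPS coordinates, subtitles) need only one transport channel, which is the simplification the summary describes.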
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (11)
1. A method of video processing, the method comprising:
based on the video sending request, carrying out coding processing on video data to be sent according to a video coding protocol to obtain a video frame;
packaging video auxiliary data to be sent into a custom data frame;
and obtaining a target video code stream according to the video frame and the custom data frame, and sending the target video code stream through a specified channel.
2. The method of claim 1, wherein obtaining a target video stream according to the video frame and the custom data frame, and sending the target video stream through a specified channel comprises:
obtaining an initial video code stream according to all the obtained video frames;
inserting the custom data frame into the initial video code stream to obtain the target video code stream;
and sending the target video code stream through a specified channel.
3. The method of claim 2, wherein the inserting the custom data frame into the initial video code stream to obtain the target video code stream comprises:
and inserting the custom data frame into the initial video code stream according to the timestamp of the custom data frame to obtain a target video code stream.
4. The method of claim 3, wherein the inserting the custom data frame into the initial video code stream according to the timestamp of the custom data frame to obtain a target video code stream comprises:
judging whether a synchronous video frame of the custom data frame exists in the initial video code stream, wherein the synchronous video frame is a video frame in the initial video code stream that has the same timestamp as the custom data frame;
if yes, setting the custom data frame adjacent to the synchronous video frame in the initial video code stream;
and if not, inserting the custom data frame into the initial video code stream in timestamp order.
5. The method of claim 4, wherein the setting the custom data frame adjacent to the synchronous video frame in the initial video code stream if the synchronous video frame exists comprises:
if the synchronous video frame of the custom data frame exists in the initial video code stream, setting the custom data frame adjacent to the synchronous video frame in the initial video code stream, behind the synchronous video frame.
6. The method of claim 3, wherein after inserting the custom data frame into the initial video code stream according to the timestamp of the custom data frame to obtain a target video code stream, the method further comprises:
and setting the interframe sequence of the custom data frame according to the timestamp of the custom data frame, wherein the interframe sequence represents the playing sequence of the custom data frame and the video frame adjacent to the custom data frame.
7. The method of claim 6, wherein setting the interframe order of the custom data frame according to the timestamp of the custom data frame comprises:
if the timestamp of the custom data frame is the same as the timestamp of the adjacent video frame, setting the inter-frame order of the custom data frame to a first order, wherein the first order represents that the custom data frame and the adjacent video frame with the same timestamp are played simultaneously;
and if the timestamp of the custom data frame is different from the timestamp of the adjacent video frame, setting the inter-frame order of the custom data frame to a second order, wherein the second order represents that the custom data frame and the adjacent video frame are played in turn according to their arrangement order in the target video code stream.
8. A method of video processing, the method comprising:
based on the video playing request, obtaining a target video code stream through a specified channel;
obtaining a video frame and a user-defined data frame according to the target video code stream;
decoding the video frame according to a video coding protocol to obtain video data;
acquiring video auxiliary data encapsulated in the custom data frame;
and playing the video data and the video auxiliary data.
9. A video processing apparatus, characterized in that the apparatus comprises:
the video frame acquisition module is used for coding the video data to be sent according to a video coding protocol based on the video sending request to obtain a video frame;
the custom data frame acquisition module is used for packaging the video auxiliary data to be sent into a custom data frame;
and the target video code stream acquisition module is used for acquiring a target video code stream according to the video frame and the custom data frame and sending the target video code stream through a specified channel.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
11. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110137850.6A CN112954433B (en) | 2021-02-01 | 2021-02-01 | Video processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112954433A true CN112954433A (en) | 2021-06-11 |
CN112954433B CN112954433B (en) | 2024-01-09 |
Family
ID=76241051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110137850.6A Active CN112954433B (en) | 2021-02-01 | 2021-02-01 | Video processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112954433B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018033152A1 (en) * | 2016-08-19 | 2018-02-22 | 中兴通讯股份有限公司 | Video playing method and apparatus |
CN109348252A (en) * | 2018-11-01 | 2019-02-15 | 腾讯科技(深圳)有限公司 | Video broadcasting method, video transmission method, device, equipment and storage medium |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113507628A (en) * | 2021-06-30 | 2021-10-15 | 深圳市圆周率软件科技有限责任公司 | Video data processing method and related equipment |
CN113923513A (en) * | 2021-09-08 | 2022-01-11 | 浙江大华技术股份有限公司 | Video processing method and device |
CN113923513B (en) * | 2021-09-08 | 2024-05-28 | 浙江大华技术股份有限公司 | Video processing method and device |
CN115695858A (en) * | 2022-11-08 | 2023-02-03 | 天津萨图芯科技有限公司 | SEI encryption-based virtual film production video master film coding and decoding system, method and platform |
CN115695858B (en) * | 2022-11-08 | 2024-09-13 | 天津萨图芯科技有限公司 | SEI (solid-state imaging device) encryption-based virtual film-making video master film coding and decoding control method |
Also Published As
Publication number | Publication date |
---|---|
CN112954433B (en) | 2024-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12096046B2 (en) | Live streaming method and system, server, and storage medium | |
US11184627B2 (en) | Video transcoding system, method, apparatus, and storage medium | |
CN112954433B (en) | Video processing method, device, electronic equipment and storage medium | |
US8918533B2 (en) | Video switching for streaming video data | |
CN112752115B (en) | Live broadcast data transmission method, device, equipment and medium | |
US20160337424A1 (en) | Transferring media data using a websocket subprotocol | |
EP1439705A2 (en) | Method and apparatus for processing, transmitting and receiving dynamic image data | |
CN112073543B (en) | Cloud video recording method and system and readable storage medium | |
CN102868939A (en) | Method for synchronizing audio/video data in real-time video monitoring system | |
CN112584231B (en) | Video live broadcast method and device, edge device of CDN (content delivery network) and user terminal | |
JP6377784B2 (en) | A method for one-to-many audio-video streaming with audio-video synchronization capture | |
US20180176278A1 (en) | Detecting and signaling new initialization segments during manifest-file-free media streaming | |
US11570226B2 (en) | Protocol conversion of a video stream | |
CN111770390B (en) | Data processing method, device, server and storage medium | |
CN110996122B (en) | Video frame transmission method, device, computer equipment and storage medium | |
US8223270B2 (en) | Transmitter, receiver, transmission method, reception method, transmission program, reception program, and video content data structure | |
CN112565224B (en) | Video processing method and device | |
CN115209163B (en) | Data processing method and device, storage medium and electronic equipment | |
CN105979284B (en) | Mobile terminal video sharing method | |
CN112584088B (en) | Method for transmitting media stream data, electronic device and storage medium | |
CN102843566B (en) | Communication method and equipment for three-dimensional (3D) video data | |
CN110351576B (en) | Method and system for rapidly displaying real-time video stream in industrial scene | |
WO2024082561A1 (en) | Video processing method and apparatus, computer, readable storage medium, and program product | |
CN112565799B (en) | Video data processing method and device | |
JP2010081227A (en) | Moving image decoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||