Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide a method and apparatus for video data processing that overcome or at least partially solve the above-mentioned problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for processing video data, which is applied to a video network, wherein the video network comprises a video network server and a video network playing terminal, and the method comprises:
when an original byte code stream of video data is received, identifying a network abstraction layer unit from the original byte code stream;
determining whether the type of the network abstraction layer unit is a preset type;
when the type of the network abstraction layer unit is determined to be a preset type, extracting the network abstraction layer unit;
recombining the extracted network abstraction layer units to obtain a target byte code stream;
and sending the target byte code stream to a video network server through a video network, wherein the video network server is used for sending the target byte code stream to a video network playing terminal.
Preferably, the network abstraction layer unit includes a start code, and the step of identifying the network abstraction layer unit from the original byte code stream of the video data when the original byte code stream is received includes:
receiving an original byte code stream of video data;
detecting whether the original byte code stream contains the start code or not;
and if so, determining the original byte code stream from the start code to the next start code as a network abstraction layer unit.
Preferably, the step of determining whether the type of the network abstraction layer unit is a preset type includes:
acquiring the type byte code of the network abstraction layer unit;
judging whether the type byte code is a preset byte code or not;
and if so, determining that the type of the network abstraction layer unit is a preset type.
Preferably, the step of obtaining the type bytecode of the network abstraction layer unit includes:
acquiring byte codes of the designated positions of the network abstraction layer units;
and taking the byte code of the specified position as a type byte code.
Preferably, the step of recombining the extracted network abstraction layer units to obtain the target byte code stream includes:
determining the sequence of each network abstraction layer unit in the original byte code stream;
sequencing the extracted network abstraction layer units according to the sequence;
and recombining the sequenced network abstraction layer units to obtain a target byte code stream.
Preferably, the step of sending the target byte code stream to a video network server through a video network includes:
packaging the target byte code stream to obtain a video data packet;
and sending the video data packet to the video networking server through a data communication link allocated by the video networking server.
The embodiment of the invention also provides a device for processing video data, which is applied to the video network, wherein the video network comprises a video network server and a video network playing terminal, and the device comprises:
the identification module is used for identifying a network abstraction layer unit from an original byte code stream of video data when the original byte code stream is received;
the preset type determining module is used for determining whether the type of the network abstraction layer unit is a preset type;
the extraction module is used for extracting the network abstraction layer unit when the type of the network abstraction layer unit is determined to be a preset type;
the target byte code stream combination module is used for recombining the extracted network abstraction layer units to obtain a target byte code stream;
and the sending module is used for sending the target byte code stream to a video network server through a video network, and the video network server is used for sending the target byte code stream to a video network playing terminal.
Preferably, the network abstraction layer unit includes a start code, and the identification module includes:
the receiving submodule is used for receiving an original byte code stream of the video data;
the detection submodule is used for detecting whether the original byte code stream contains the start code or not;
and the determining submodule is used for determining the original byte code stream from the start code to the next start code as a network abstraction layer unit.
Preferably, the preset type determining module includes:
a type byte code obtaining submodule, configured to obtain a type byte code of the network abstraction layer unit;
the preset byte code judging submodule is used for judging whether the type byte code is a preset byte code or not;
and the preset type determining submodule is used for determining that the type of the network abstraction layer unit is a preset type.
Preferably, the type bytecode obtaining sub-module includes:
a specified bytecode acquiring unit, configured to acquire a bytecode at a specified location of the network abstraction layer unit;
a type bytecode determining unit, configured to use the bytecode at the specified location as a type bytecode.
Preferably, the target byte code stream combination module includes:
the order determination submodule is used for determining the order of each network abstraction layer unit in the original byte code stream;
the sequencing submodule is used for sequencing the extracted network abstraction layer units according to the sequence;
and the combining submodule is used for recombining the sequenced network abstraction layer units to obtain the target byte code stream.
Preferably, the sending module includes:
the encapsulation submodule is used for encapsulating the target byte code stream to obtain a video data packet;
and the sending submodule is used for sending the video data packet to the video networking server through a data communication link allocated by the video networking server.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, after an original byte code stream of video data is received, a network abstraction layer unit is identified from the original byte code stream, and whether the type of the network abstraction layer unit is a preset type is determined. When the type is a preset type, the network abstraction layer unit is extracted; the extracted network abstraction layer units are then recombined to obtain a target byte code stream, and the target byte code stream is sent to the video network playing terminal. Because only network abstraction layer units of the preset type are recombined, redundant byte code segments unrelated to the video data are removed from the target byte code stream, which improves the playing quality of the video as well as the fault tolerance and decoding efficiency of the playing terminal decoder.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for processing video data according to the present invention is shown, where the method may be applied to a video network, where the video network includes a video network server and a video network playing terminal, and the method specifically includes the following steps:
step 101, when an original byte code stream of video data is received, identifying a network abstraction layer unit from the original byte code stream;
in the embodiment of the invention, the video data may be data collected by a monitoring terminal (camera) of the video network. The collected video data may be encoded into an original byte code stream using the H.264 coding standard: compared with earlier video coding standards, H.264 can provide better image quality at the same bandwidth, and its compression efficiency at the same image quality is roughly double that of standards such as MPEG-2. The H.264 encoding may produce the video data in a byte stream format, for example a hexadecimal byte stream.
H.264 was designed to accommodate the differing transmission characteristics of different networks and applications, and therefore defines two layers: the VCL (Video Coding Layer), responsible for efficient representation of the video content, and the NAL (Network Abstraction Layer), responsible for packetizing and transmitting the data in the manner the network requires. The NAL uses the NALU (network abstraction layer unit) as its basic unit to support transmission of the encoded data over packet-switched networks. Therefore, after the original byte code stream of the video data is received, the network abstraction layer units can be identified. Specifically, step 101 may include the following sub-steps:
substep 1011, receiving an original byte code stream of the video data;
in the embodiment of the present invention, the original byte code stream may be a hexadecimal byte code stream, for example, the following byte code streams are the hexadecimal original byte code stream:
00 00 00 01 2C 00 10 32 20 6E 75 6D 20 30 00 80 00 00 00 01 26 05 1A DC……
the above is merely an example of an original byte code stream expressed in hexadecimal; the original byte code stream may of course be expressed in another base. For example, a binary original byte code stream may be converted from binary to hexadecimal when the embodiment of the present invention is implemented. The base of the original byte code stream is not limited in the embodiment of the present invention.
Substep 1012, detecting whether the original byte code stream contains the start code;
in order to enable a decoder to conveniently detect the boundaries of the network abstraction layer units and take them out in turn for decoding, H.264 adds a start code in front of each network abstraction layer unit: "00000001" or "000001". On some types of media, for convenience of addressing, the byte code stream is required to be length-aligned, or an integer multiple of some constant; in that case a few "0" bytes are added before the start code as padding. For example, if the byte code stream is required to be 4 bytes long, a byte "00" is added before the three-byte start code "000001".
In the embodiment of the invention, whether the original byte code stream contains a start code can be detected as follows. First, it is detected whether the received original byte code stream contains "000001"; if so, a network abstraction layer unit is detected. Otherwise, it is detected whether the stream contains "00000001"; if so, a network abstraction layer unit is likewise detected. This detection is repeated cyclically until the original byte code stream ends. Specifically, the detection may proceed byte by byte: when two consecutive "00" bytes have been seen, it is checked whether the next byte is "01"; if so, the start code "000001" is detected. Similarly, when three consecutive "00" bytes have been seen and the next byte is "01", the start code "00000001" is detected. Alternatively, a whole-match method may be used: the start code "000001" is matched as a whole against 3 bytes of the original byte code stream. Specifically, the Nth byte and the two bytes following it are taken to form a three-byte group and matched against the start code "000001"; after that comparison, the (N+1)th byte and the two bytes following it form the next group, and so on, so that byte segments of the start-code length (3 or 4 bytes) are matched against the start code one position at a time.
In the embodiment of the invention, the network abstraction layer unit in the original byte code stream can be effectively identified by circularly searching the start code, so that the type of the network abstraction layer unit can be determined subsequently, and whether the network abstraction layer unit is video data or not can be further determined.
And a substep 1013, determining the original byte code stream from the start code to the next start code as a network abstraction layer unit.
Since H.264 adds a start code ("00000001" or "000001") before each network abstraction layer unit to mark its beginning, detecting the next start code means that the current network abstraction layer unit has ended and the next one begins. The original byte code stream from one start code to the next can therefore be determined to be a network abstraction layer unit.
In the embodiment of the invention, after the original byte code stream of the video data is received, whether the original byte code stream has the start code or not is circularly detected, and if the original byte code stream has the start code, the byte code stream between the two start codes is determined to be a network abstraction layer unit.
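The cyclic start-code detection and boundary determination described in sub-steps 1012 and 1013 can be sketched as follows. This is an illustrative sketch only; the function names, and the choice to return each unit's payload without its start code, are assumptions rather than part of the embodiment:

```python
def find_start_codes(stream: bytes):
    """Locate each start code and report (offset, length).

    A start code is the three-byte 00 00 01; when a 00 byte immediately
    precedes it, the four-byte form 00 00 00 01 is assumed.
    """
    codes = []
    i = 0
    while i + 3 <= len(stream):
        if stream[i:i + 3] == b"\x00\x00\x01":
            if i > 0 and stream[i - 1] == 0:
                codes.append((i - 1, 4))   # four-byte start code
            else:
                codes.append((i, 3))       # three-byte start code
            i += 3                          # skip past the matched code
        else:
            i += 1
    return codes


def split_nalus(stream: bytes):
    """A network abstraction layer unit runs from one start code to the
    next; the returned slices omit the start codes themselves."""
    codes = find_start_codes(stream)
    nalus = []
    for k, (off, length) in enumerate(codes):
        end = codes[k + 1][0] if k + 1 < len(codes) else len(stream)
        nalus.append(stream[off + length:end])
    return nalus
```

Applied to the example stream shown earlier, this yields two units, beginning with the bytes 2C and 26 respectively.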
Step 102, determining whether the type of the network abstraction layer unit is a preset type;
in the original byte code stream of the video data, there may be useless data added by a third-party monitoring resource, or byte code segments generated in an abnormal state that are unrelated to the video data; such unrelated byte code segments also form network abstraction layer units. In the H.264 standard, each network abstraction layer unit carries a type, so whether a network abstraction layer unit is video data can be determined from its type. Specifically, step 102 may include the following sub-steps:
substep 1021, obtaining the type byte code of the network abstraction layer unit;
in H.264, the first byte after the start code is defined to indicate the type of the network abstraction layer unit. If the start code is the four-byte "00000001", the fifth byte is the type byte code; thus the byte code at the specified position of the network abstraction layer unit can be acquired and taken as the type byte code.
For example, in the following original byte code stream, the bytes 2C and 26 following each start code 00000001 are type byte codes.
00 00 00 01 2C 00 10 32 20 6E 75 6D 20 30 00 80 00 00 00 01 26 05 1A DC……
Thus, when a network abstraction layer unit is identified, the bytecode at the specified location can be obtained as the type bytecode of the network abstraction layer unit.
Substep 1022, determining whether the type bytecode is a preset bytecode;
in practical application, a video is composed of a number of still images. For video with little change, a complete image frame A may be encoded first, and the following image frame B then encodes only its difference from frame A, so that frame B occupies only a fraction of the size of the complete frame A. If the image frame C following frame B likewise changes little, frame C may be encoded in the same way as frame B, and so on. Such a run of frames is called a sequence (a sequence is a piece of data with the same characteristics). When an image differs greatly from the preceding images and cannot be generated by reference to a preceding frame, the previous sequence ends and the next sequence starts: a new complete image frame A1 is generated, and the following frames are generated by reference to A1, recording only their differences from A1. In H.264, a complete image frame is defined as an I frame and the remaining image frames as P frames. In addition, to enable a decoder to decode accurately, parameter sets are defined: a parameter set is a group of rarely-changing data providing decoding information, and includes the sequence parameter set, which applies to a series of consecutive coded images, and the picture parameter set, which applies to one or more individual images within a coded video sequence.
In h.264, the values of the sequence parameter set, the picture parameter set, the I frame, and the P frame are defined as follows:
0x67 Sequence Parameter Set (SPS)
0x68 Picture Parameter Set (PPS)
0x65 I frame (IDR)
0x61 P frame (non-IDR slice)
Therefore, the preset byte codes can be set to 0x67, 0x68, 0x65, and 0x61. After the type byte code of the network abstraction layer unit is obtained, it can be determined whether the type byte code is one of the preset byte codes; if so, sub-step 1023 is executed, otherwise the unit is skipped and the type of the next network abstraction layer unit is determined.
Of course, the types of the sequence parameter set, the picture parameter set, the I frame, and the P frame may also be expressed as decimal values, for example:
7 Sequence Parameter Set (SPS)
8 Picture Parameter Set (PPS)
5 I frame (IDR)
1 P frame (non-IDR slice)
The values 7, 8, 5, and 1 are the nal_unit_type values carried in the lower five bits of the byte codes 0x67, 0x68, 0x65, and 0x61 (for example, 0x67 & 0x1F = 7); they may equally be expressed in another base, such as binary.
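One way to perform this comparison is to mask the type byte code with 0x1F, which isolates its lower five bits (in H.264, the nal_unit_type field): 0x67 & 0x1F = 7, 0x68 & 0x1F = 8, and so on. A minimal sketch, with constant names that are assumptions:

```python
# nal_unit_type values of interest (lower five bits of the type byte code)
NAL_NON_IDR, NAL_IDR, NAL_SPS, NAL_PPS = 1, 5, 7, 8
PRESET_TYPES = {NAL_NON_IDR, NAL_IDR, NAL_SPS, NAL_PPS}


def is_preset_type(type_byte: int) -> bool:
    """True when the unit is a sequence parameter set, picture
    parameter set, I frame (IDR) or P frame (non-IDR slice)."""
    return (type_byte & 0x1F) in PRESET_TYPES
```

For the type bytes 2C and 26 of the example stream, the masked values are 12 and 6, neither of which is a preset type, so both units would be discarded.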
Sub-step 1023 of determining that the type of said network abstraction layer unit is a preset type.
When the type byte code of the network abstraction layer unit is a preset byte code, for example one of 0x67, 0x68, 0x65, and 0x61, the type of the network abstraction layer unit is determined to be a preset type; that is, the network abstraction layer unit is one of a sequence parameter set, a picture parameter set, an I frame, and a P frame, and is therefore a network abstraction layer unit related to the video data. If the type is not a preset type, the network abstraction layer unit is redundant or abnormal data.
In the embodiment of the invention, after the network abstraction layer unit is identified, the type byte code of the network abstraction layer unit is obtained, the type of the network abstraction layer unit is determined through the type byte code, and the byte code stream of video data and the byte code stream of non-video data can be determined from the original byte code stream, so that the byte code stream of the non-video data is removed.
Step 103, when the type of the network abstraction layer unit is determined to be a preset type, extracting the network abstraction layer unit;
and when the type of the network abstraction layer unit is determined to be a preset type, extracting the network abstraction layer unit and storing the network abstraction layer unit into a cache.
In the embodiment of the present invention, the preset types are a sequence parameter set, a picture parameter set, an I frame, and a P frame related to a video, and thus the extracted network abstraction layer unit is necessarily video data.
Step 104, recombining the extracted network abstraction layer units to obtain a target byte code stream.
In practical applications, recombination may be performed once the network abstraction layer units of a frame of data have been extracted. Specifically, step 104 may include the following sub-steps:
substep 1041, determining the sequence of each network abstraction layer unit in the original byte code stream;
substep 1042, sorting the extracted network abstraction layer units according to the sequence;
and a substep 1043 of recombining the sequenced network abstraction layer units to obtain a target byte code stream.
In the embodiment of the invention, the order in which the network abstraction layer units are extracted, which matches their order in the original byte code stream, can be used as the sequence; after sorting according to this sequence, the units are recombined to obtain the target byte code stream.
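Sub-steps 1041 to 1043 amount to re-joining the kept units, in their original order, each behind a fresh start code. A minimal sketch; re-prefixing every unit with the four-byte start code is an assumption:

```python
def recombine(nalus):
    """Re-join extracted network abstraction layer units, in the order
    they appeared in the original stream, into the target byte stream."""
    start_code = b"\x00\x00\x00\x01"
    return b"".join(start_code + n for n in nalus)
```

Because a Python list preserves insertion order, extracting the units in stream order already yields the required sequence for recombination.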
The network abstraction layer units combined into the target byte code stream are all video data, so no redundant byte code stream fragments unrelated to the video data remain in the target byte code stream; this improves the quality of the video, the fault tolerance of the playing terminal decoder, and the decoding efficiency.
Step 105, sending the target byte code stream to a video network server through a video network, wherein the video network server is used for sending the target byte code stream to a video network playing terminal.
In a preferred embodiment of the present invention, step 105 comprises the following sub-steps:
substep 1051, packaging the target byte code stream to obtain a video data packet;
in the embodiment of the invention, after the target byte code stream of a frame has been assembled, it needs to be encapsulated for data transmission, for example PS encapsulation of the H.264 video data. In the target byte code stream, the sequence parameter set and the picture parameter set usually precede the I frame, so the sequence parameter set, the picture parameter set, and the I-frame network abstraction layer unit can be encapsulated together into one PS packet, which includes a PS packet header and a PES packet header; for a P frame, a PS packet header and a PES packet header can be added directly to form a PS packet.
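As a rough illustration of the encapsulation step, the sketch below wraps a payload in a minimal PES packet with the video stream_id 0xE0. It is deliberately incomplete: a real PS encapsulation also requires the pack header (beginning 00 00 01 BA), system headers, and PTS/DTS timestamps, all of which are omitted here.

```python
import struct


def pes_wrap(payload: bytes) -> bytes:
    """Minimal PES packet: start code prefix, stream_id 0xE0,
    16-bit packet length, then a bare optional header (no PTS/DTS)."""
    header_tail = b"\x80\x00\x00"  # marker bits '10', no flags, 0 header bytes
    length = len(header_tail) + len(payload)  # bytes after the length field
    return (b"\x00\x00\x01\xe0"
            + struct.pack(">H", length)
            + header_tail
            + payload)
```

The packet length field counts every byte after itself, which is why it covers the three optional-header bytes plus the payload.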
Sub-step 1052, sending the video data packet to the video networking server through a data communication link allocated by the video networking server.
In practical application, the video data can be collected, processed, and encapsulated at an Ethernet monitoring terminal and then sent to the video networking server, or collected, processed, and encapsulated at a video networking terminal and then sent to the video networking server.
The video network is a network with a centralized control function, comprising a master control server and lower-level network devices, where the lower-level network devices include terminals. One of the core concepts of the video network is that the master control server notifies a switching device to configure a table for the downlink communication link of the current service, and data packets are then transmitted on the basis of the configured table. That is, the communication method in the video network includes the following steps:
the video network server configures a downlink data communication link of the current service:
and transmitting the data packet of the current service sent by the source terminal (such as an Ethernet terminal or a video network terminal) to the target terminal (such as a video network server) according to the downlink data communication link.
In the embodiment of the present invention, configuring the downlink data communication link of the current service includes: informing the switching equipment related to the downlink data communication link of the current service to allocate a table;
further, transmitting according to the downlink data communication link includes: the configured table is consulted, and the switching equipment transmits the received data packet through the corresponding port.
In the embodiment of the present invention, before the Ethernet terminal or the video networking terminal sends the video data packet to the video networking server, during the process of connecting to the video network, the video networking server allocates to the terminal a data communication link between the terminal and the video networking server according to the address of the Ethernet terminal or the video networking terminal. The data communication link information includes the devices (such as an Ethernet gateway or a switch) and the ports involved in the transmission process, so that the encapsulated video data packet can be sent to the video networking server according to this information. Taking an Ethernet terminal as an example: the video data packet is first transmitted within the Ethernet, according to the Ethernet protocol, to the Ethernet gateway; after performing protocol conversion, the Ethernet gateway transmits the video data packet to the next device through the designated port according to the data communication link information, and so on until the packet reaches the video networking server. After receiving the video data packet, the video networking server transmits it to the video networking terminal that made the on-demand request, and that terminal decodes and plays it.
In the embodiment of the invention, redundant byte code segments in the original byte code stream of the video data are removed and the remainder is recombined into the target byte code stream; the target byte code stream is encapsulated and sent to the video networking server, and the video networking server sends it to the video networking terminal, for example a set-top box, for decoding and playing. This improves the fault tolerance and decoding efficiency of the set-top box and improves the playing quality of the video.
To better illustrate the embodiments of the present invention, a specific example is described below:
referring to fig. 2, a flow chart of an example of video data processing according to the present invention is shown, where the video data processing flow includes:
S1: receiving a hexadecimal original byte code stream of H.264 video data;
S2: searching for "000001" in the original byte code stream; if found, executing S3; otherwise, executing S4;
S3: acquiring the fifth byte code (if the start code is the three-byte "000001", one byte "00" is regarded as prepended, so that the type byte code is still the fifth byte);
S4: searching for "00000001" in the original byte code stream; if found, executing S3; otherwise, returning to S1;
S5: judging whether the lower five bits of the fifth byte code equal 7, 8, 5, or 1; if so, executing S6; if not, executing S7;
S6: combining into the target byte code stream;
S7: skipping the current network abstraction layer unit.
In this example, a hexadecimal raw byte stream is received, for example:
00 00 00 01 2C 00 10 32 20 6E 75 6D 20 30 00 80 00 00 00 01 26 05 1A DC 36 31 31 46 52 41 4D 45
First, "000001" is searched for in the original byte code stream; if it is not found, "00000001" is searched for. Whenever a start code "000001" or "00000001" is present, the type byte code following it is examined: the hexadecimal byte (for example "2C") is tested to see whether its lower five bits give the value 5, 7, 8, or 1. If so, the original byte code stream from this "000001" or "00000001" up to the next "000001" or "00000001" is extracted and recombined into the target byte code stream. In this way, redundant byte code segments, that is, units whose type byte code after "000001" or "00000001" does not yield 5, 7, 8, or 1, are removed from the original byte code stream, and the target byte code stream is guaranteed to consist entirely of video data.
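The whole S1-S7 flow can be sketched end to end as follows (an illustrative sketch, assuming the low-five-bit type test described above):

```python
PRESET = {7, 8, 5, 1}  # SPS, PPS, I frame (IDR), P frame (non-IDR)


def build_target_stream(stream: bytes) -> bytes:
    """Scan for start codes, test each unit's type byte, and recombine
    only the video-related units into the target byte code stream."""
    codes = []          # (offset, length) of every 3- or 4-byte start code
    i = 0
    while i + 3 <= len(stream):
        if stream[i:i + 3] == b"\x00\x00\x01":
            if i > 0 and stream[i - 1] == 0:
                codes.append((i - 1, 4))
            else:
                codes.append((i, 3))
            i += 3
        else:
            i += 1
    target = bytearray()
    for k, (off, ln) in enumerate(codes):
        end = codes[k + 1][0] if k + 1 < len(codes) else len(stream)
        nalu = stream[off + ln:end]
        if nalu and (nalu[0] & 0x1F) in PRESET:   # keep video data only
            target += b"\x00\x00\x00\x01" + nalu
    return bytes(target)
```

For the example stream above, the type bytes 2C and 26 give lower-five-bit values 12 and 6, so both units are discarded; a unit whose type byte is, say, 0x67 would be kept.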
In the embodiment of the invention, after an original byte code stream of video data is received, a network abstraction layer unit is identified from the original byte code stream, and whether the type of the network abstraction layer unit is a preset type is determined. When the type is a preset type, the network abstraction layer unit is extracted; the extracted network abstraction layer units are then recombined to obtain a target byte code stream, and the target byte code stream is sent to the video network playing terminal. Because only network abstraction layer units of the preset type are recombined, redundant byte code segments unrelated to the video data are removed from the target byte code stream, which improves the playing quality of the video as well as the fault tolerance and decoding efficiency of the playing terminal decoder.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 3, a block diagram of an embodiment of a video data processing apparatus according to the present invention is shown, the apparatus is applied to a video network, the video network includes a video network server and a video network playing terminal, the apparatus includes:
an identification module 201, configured to identify a network abstraction layer unit from an original byte code stream of video data when the original byte code stream is received;
a preset type determining module 202, configured to determine whether the type of the network abstraction layer unit is a preset type;
the extracting module 203 is configured to extract the network abstraction layer unit when it is determined that the type of the network abstraction layer unit is a preset type;
the target byte code stream combination module 204 is used for recombining the extracted network abstraction layer units to obtain a target byte code stream;
a sending module 205, configured to send the target byte code stream to a video networking server through a video networking, where the video networking server is configured to send the target byte code stream to a video networking play terminal.
In a preferred embodiment of the present invention, the network abstraction layer unit includes a start code, and the identification module 201 includes:
the receiving submodule is used for receiving an original byte code stream of the video data;
the detection submodule is used for detecting whether the original byte code stream contains the start code or not;
and the determining submodule is used for determining the original byte code stream from the start code to the next start code as a network abstraction layer unit.
In a preferred embodiment of the present invention, the preset type determining module 202 includes:
a type byte code obtaining submodule, configured to obtain a type byte code of the network abstraction layer unit;
the preset byte code judging submodule is used for judging whether the type byte code is a preset byte code or not;
and the preset type determining submodule is used for determining that the type of the network abstraction layer unit is a preset type.
The type bytecode obtaining submodule includes:
a specified bytecode acquiring unit, configured to acquire a bytecode at a specified location of the network abstraction layer unit;
a type bytecode determining unit, configured to use the bytecode at the specified location as a type bytecode.
In a preferred embodiment of the present invention, the target byte-stream combining module 204 includes:
the order determination submodule is used for determining the order of each network abstraction layer unit in the original byte code stream;
the sequencing submodule is used for sequencing the extracted network abstraction layer units according to the sequence;
and the combining submodule is used for recombining the sequenced network abstraction layer units to obtain the target byte code stream.
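By way of illustration only, the order determination, sequencing, and combining submodules may be sketched as follows; the predicate `wanted`, standing in for the preset-type judgment, and all names are assumptions for this sketch.

```python
# Illustrative sketch of the target byte code stream combination module 204:
# extract the wanted NAL units and recombine them, in their original stream
# order, into a single target byte code stream.

def combine_target_stream(nal_units: list[bytes], wanted) -> bytes:
    """Recombine the extracted units into the target byte code stream."""
    # enumerate() records each unit's order in the original byte stream;
    # sorting by that recorded order before joining preserves the sequence.
    extracted = [(i, u) for i, u in enumerate(nal_units) if wanted(u)]
    extracted.sort(key=lambda pair: pair[0])
    return b"".join(u for _, u in extracted)
```

Because the list comprehension already visits units in stream order, the explicit sort is redundant here; it is kept to mirror the separate sequencing submodule of the embodiment.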
In a preferred embodiment of the present invention, the sending module 205 includes:
the encapsulation submodule is used for encapsulating the target byte code stream to obtain a video data packet;
and the sending submodule is used for sending the video data packet to the video network server through a data communication link allocated by the video network server.
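By way of illustration only, the encapsulation submodule may be sketched as follows. The 1024-byte payload size and the four-byte sequence-number header are assumptions for this sketch; the disclosure does not fix a packet format.

```python
# Illustrative sketch of the encapsulation submodule of sending module 205:
# cut the target byte code stream into fixed-size video data packets, each
# prefixed with a sequence number, before transmission over the allocated
# data communication link. Packet layout here is assumed, not disclosed.

import struct

PAYLOAD_SIZE = 1024  # assumed maximum payload per video data packet

def encapsulate(target_stream: bytes) -> list[bytes]:
    """Encapsulate the target byte code stream into video data packets."""
    packets = []
    for seq, off in enumerate(range(0, len(target_stream), PAYLOAD_SIZE)):
        payload = target_stream[off:off + PAYLOAD_SIZE]
        # Prefix each packet with a 4-byte big-endian sequence number so the
        # receiving side can restore packet order.
        packets.append(struct.pack(">I", seq) + payload)
    return packets
```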
As the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus for processing video data provided by the present invention are described in detail above. Specific examples are applied herein to explain the principle and implementation of the present invention, and the description of the above embodiments is intended only to help understand the method and core idea of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation to the present invention.