CN113329269A - Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113329269A (application number CN202010128221.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- data
- video data
- format
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The embodiments of the present application provide a video encoding method, a video decoding method, a video encoding device, a video decoding device, an electronic device and a storage medium. The video encoding method comprises the following steps: acquiring a video to be encoded; splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded; and encoding each path of video data separately. Because the video to be encoded is split into multiple paths of video data according to the channels corresponding to its video frame format and each path is encoded separately, each path contains only part of the data of the video to be encoded. This reduces the demands on the encoder, removes the need for down-sampling, and thus reduces the loss of image quality during encoding and decoding.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video encoding method, a video decoding method, corresponding apparatuses, an electronic device, and a storage medium.
Background
As the demand for higher video resolution grows, ultra-high-definition video display has gradually been adopted in a variety of scenarios, such as ultra-high-definition video display in urban traffic management, in fire-control command centers, and on the display walls of shopping malls.
In the prior art, because the encoding capability of the encoder is limited, a video frame at ultra-high-definition resolution cannot be encoded directly. The encoding end therefore first down-samples the video frame to reduce its resolution and then encodes the down-sampled frame, while the decoding end up-samples the decoded video frame to recover a frame at ultra-high-definition resolution.
However, with this approach the down-sampling performed during encoding inevitably causes a loss of image quality, and even though up-sampling is performed after decoding, the original image quality cannot be effectively restored; the quality loss over the encoding and decoding process is therefore large.
Disclosure of Invention
Embodiments of the present application provide a video encoding method and apparatus, a video decoding method and apparatus, an electronic device, and a storage medium, so as to reduce the loss of image quality during encoding and decoding. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present application provides a video encoding method, which is applied to an encoding end, and the method includes:
acquiring a video to be coded;
splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded;
and respectively encoding each path of video data.
In a possible implementation, the video frames of the video to be encoded have timestamps, each path of video data includes multiple pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In a possible implementation manner, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded includes:
splitting the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the multiple channels, based on the multiple channels corresponding to the video frame format of the video to be encoded.
In a possible implementation manner, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded includes:
splitting the video data of the video to be encoded, according to a plurality of channel groups of the multiple channels corresponding to the video frame format of the video to be encoded, into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group includes at least two channels.
In a possible implementation, the video frame format of the video to be encoded is one of RGB format, CMYK format, HSL format, HSV format, HIS format and YUV format.
In a second aspect, an embodiment of the present application provides a video decoding method, which is applied to a decoding end, and the method includes:
acquiring video data to be decoded, wherein the video data to be decoded comprises multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to the multiple channels corresponding to the video frame format of the target video;
decoding each path of coded video data to obtain each path of video data;
and merging the video data of each path to obtain the target video.
In a possible implementation, the video frames of the target video have timestamps, each path of video data includes a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In one possible embodiment, the video frame format of the target video includes but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
In a third aspect, an embodiment of the present application provides a video encoding apparatus, which is applied to an encoding end, and the apparatus includes:
the video acquisition module is used for acquiring a video to be coded;
the video splitting module is used for splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded;
and the video coding module is used for coding each path of video data respectively.
In a possible implementation, the video frames of the video to be encoded have timestamps, each path of video data includes multiple pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In one possible embodiment, the video frame format of the video to be encoded includes, but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
In a possible implementation manner, the video splitting module is specifically configured to: split the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the multiple channels, based on the multiple channels corresponding to the video frame format of the video to be encoded.
In a possible implementation manner, the video splitting module is specifically configured to: split the video data of the video to be encoded, according to a plurality of channel groups of the multiple channels corresponding to the video frame format of the video to be encoded, into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group includes at least two channels.
In a fourth aspect, an embodiment of the present application provides a video decoding apparatus, which is applied to a decoding end, and the apparatus includes:
the data receiving module is used for acquiring video data to be decoded, wherein the video data to be decoded comprises multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to the multiple channels corresponding to the video frame format of the target video;
the video decoding module is used for decoding each path of coded video data to obtain each path of video data;
and the video merging module is used for merging the video data of each path to obtain the target video.
In a possible implementation, the video frames of the target video have timestamps, each path of video data includes a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In one possible embodiment, the video frame format of the target video includes but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the video encoding method according to any of the first aspect described above when executing the program stored in the memory.
In a sixth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the video decoding method according to any of the second aspects when executing the program stored in the memory.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the video encoding method according to any of the first aspect.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the video decoding method according to any one of the second aspects.
The video encoding and decoding methods, apparatuses, electronic devices and storage media provided by the embodiments of the present application acquire a video to be encoded, split the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to its video frame format, and encode each path of video data separately. Because the video to be encoded is split into multiple paths of video data according to the channels corresponding to the video frame format and each path is encoded separately, each path contains only part of the data of the video to be encoded. This reduces the demands on the encoder, removes the need for down-sampling, and thus reduces the loss of image quality during encoding and decoding. Of course, not all of the advantages described above need to be achieved at the same time by any particular product or method implementing the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first schematic diagram of a video encoding method applied to an encoding end according to an embodiment of the present application;
fig. 2 is a second schematic diagram of a video encoding method applied to an encoding end according to an embodiment of the present application;
fig. 3 is a third schematic diagram of a video encoding method applied to an encoding end according to an embodiment of the present application;
fig. 4 is a fourth schematic diagram of a video encoding method applied to an encoding end according to an embodiment of the present application;
fig. 5 is a fifth schematic diagram of a video encoding method applied to an encoding end according to an embodiment of the present application;
fig. 6 is a first schematic diagram of a video decoding method applied to a decoding end according to an embodiment of the present application;
fig. 7 is a second schematic diagram of a video decoding method applied to a decoding end according to an embodiment of the present application;
fig. 8 is a third schematic diagram of a video decoding method applied to a decoding end according to an embodiment of the present application;
fig. 9 is a schematic diagram of a video transmission method according to an embodiment of the present application;
fig. 10 is a schematic diagram of a video encoding apparatus applied to an encoding end according to an embodiment of the present application;
fig. 11 is a schematic diagram of a video decoding apparatus applied to a decoding end according to an embodiment of the present application;
fig. 12 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to reduce the loss of image quality in the encoding and decoding processes, an embodiment of the present application provides a video encoding method applied to an encoding end, and referring to fig. 1, the method includes:
s101, obtaining a video to be coded.
The video encoding method applied to the encoding end in the embodiments of the present application can be implemented by an electronic device with encoding capability; specifically, the electronic device may be a video camera, a hard disk video recorder, or the like. The video to be encoded is composed of video frames and may be captured by the electronic device itself or by a camera connected to the electronic device.
S102, splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded.
The video to be encoded is composed of video frames, and all video frames in the same video have the same video frame format. Each video frame format corresponds to a plurality of channels, and the video frame format may adopt any color model of a color space, for example, the RGB (Red, Green, Blue) format, the CMYK (Cyan, Magenta, Yellow, Black) format, the HSL (Hue, Saturation, Lightness) format, the HSV (Hue, Saturation, Value) format, the HIS (Hue, Intensity, Saturation) format, the YUV format, and the like.
In one possible embodiment, the channels are color channels. For example, the RGB format corresponds to the R, G, and B channels, and the CMYK format corresponds to the C, M, Y, and K channels. The video to be encoded is split into the data corresponding to each channel according to the channels corresponding to its video frame format, yielding multiple paths of video data.
The video to be coded is split into multiple paths of video data, and each path of video data comprises data of at least one channel.
For example, when the video frame format is the HSL format, the video to be encoded is split into H-channel video data, S-channel video data, and L-channel video data. When the video frame format is the HSV format, the video to be encoded is split into H-channel video data, S-channel video data, and V-channel video data. When the video frame format is the HIS format, the video to be encoded is split into H-channel video data, I-channel video data, and S-channel video data. When the video frame format is the YUV format, the video to be encoded is split into Y-channel video data, U-channel video data, and V-channel video data, where Y represents luminance (i.e., the gray-scale value) and U and V represent chrominance, which describes the color and saturation of the image and specifies the color of each pixel.
For example, when the video frame format is the HSL format, the video to be encoded is split into two paths of video data, where one path of video data includes data of the H channel and the S channel, and the other path of video data includes data of the L channel.
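As an illustration of the one-path-per-channel split described above, the following sketch (not part of the patent) assumes each video frame is held as an H x W x C array whose last axis carries the channels of the frame format; the function name, numpy representation, and frame size are assumptions for the example only.

```python
import numpy as np

def split_frame_per_channel(frame: np.ndarray, channel_names: list[str]) -> dict[str, np.ndarray]:
    """Split one video frame into one piece of frame data per channel of its format."""
    if frame.shape[-1] != len(channel_names):
        raise ValueError("channel count does not match the declared video frame format")
    # Each returned plane contains only part of the data of the frame and can be encoded on its own.
    return {name: frame[..., i].copy() for i, name in enumerate(channel_names)}

# Example: an RGB frame becomes three single-channel pieces of frame data.
rgb_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)   # an ultra-high-definition frame, for illustration
paths = split_frame_per_channel(rgb_frame, ["R", "G", "B"])
# paths["R"], paths["G"] and paths["B"] are the three paths of video data for this frame.
```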
And S103, respectively coding each path of video data.
Each path of video data is encoded separately. Since each path of video data contains only part of the data of the video to be encoded, the demands on the encoder can be reduced.
In the embodiments of the present application, the video to be encoded is split into multiple paths of video data according to the channels corresponding to the video frame format, and each path of video data is encoded separately. Each path of video data contains only part of the data of the video to be encoded, which reduces the demands on the encoder, removes the need for down-sampling, and thus reduces the loss of image quality during encoding and decoding.
In a possible implementation, the video frames of the video to be encoded have timestamps, each path of video data includes a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
For example, the video frame format corresponds to channels 1-3, and the video frame a may be split into channel 1 frame data a, channel 2 frame data b, and channel 3 frame data c, where the time stamp of the video frame a is the same as the time stamp of the channel 1 frame data a, channel 2 frame data b, and channel 3 frame data c.
In the embodiments of the present application, each piece of frame data inherits the timestamp of the video frame it was split from, so that frame data belonging to different video frames can be distinguished by timestamp, which facilitates the subsequent synthesis of the frame data.
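A minimal sketch of this timestamp inheritance, assuming the in-memory representation above; the FrameData structure and field names are illustrative and not defined by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameData:
    channel: str        # e.g. "R", "G" or "B"
    timestamp: int      # inherited unchanged from the source video frame
    pixels: np.ndarray  # the single-channel plane split out of that frame

def split_with_timestamp(frame: np.ndarray, timestamp: int, channel_names: list[str]) -> list[FrameData]:
    # Every piece of frame data carries the timestamp of the video frame it was split from,
    # so the decoding end can later group the pieces that belong to the same frame.
    return [FrameData(name, timestamp, frame[..., i].copy())
            for i, name in enumerate(channel_names)]
```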
In a possible implementation manner, referring to fig. 2, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded includes:
S201, splitting the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the multiple channels, based on the multiple channels corresponding to the video frame format of the video to be encoded.
Optionally, when the video frame format of the video to be encoded is an RGB format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
and splitting the video to be coded into R channel video data, G channel video data and B channel video data according to the channels corresponding to the RGB format.
Optionally, when the video frame format of the video to be encoded is a CMYK format, splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded, where the splitting includes:
and splitting the video to be coded into C channel video data, M channel video data, Y channel video data and K channel video data according to the channels corresponding to the CMYK format.
Optionally, when the video frame format of the video to be encoded is the HSL format, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple paths corresponding to the video frame format of the video to be encoded includes:
and splitting the video to be coded into H channel video data, S channel video data and L channel video data according to the channel corresponding to the HSL format.
Optionally, when the video frame format of the video to be encoded is an HSV format, splitting the video data of the video to be encoded into multiple paths of video data based on the multiple paths corresponding to the video frame format of the video to be encoded, where the splitting includes:
and splitting the video to be coded into H channel video data, S channel video data and V channel video data according to the channel corresponding to the HSV format.
Optionally, when the video frame format of the video to be encoded is the HIS format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
and splitting the video to be coded into H channel video data, I channel video data and S channel video data according to the channel corresponding to the HIS format.
Optionally, when the video frame format of the video to be encoded is a YUV format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
and splitting the video to be coded into Y-channel video data, U-channel video data and V-channel video data according to the channel corresponding to the YUV format.
In a possible implementation manner, referring to fig. 3, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded includes:
S301, splitting the video data of the video to be encoded, according to a plurality of channel groups of the multiple channels corresponding to the video frame format of the video to be encoded, into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group includes at least two channels.
The channel grouping can be customized according to the actual situation.
Optionally, when the video frame format of the video to be encoded is an RGB format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
according to channels corresponding to an RGB format, splitting a video to be coded into first video data and second video data, wherein the first video data comprise data of two channels in R, G, B channels, and the second video data comprise data of another channel in R, G, B channels except the first video data.
For example, the first video data includes data of R, G channels, and the second video data includes data of B channels; or, the first video data includes R, B channel data, and the second video data includes G channel data; or, the first video data includes data of G, B channels, the second video data includes data of R channels, and the like.
Optionally, when the video frame format of the video to be encoded is a CMYK format, splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded, where the splitting includes:
according to color channels corresponding to CMYK formats, splitting a video to be coded into first video data and second video data, wherein the first video data comprise data of at least two of C, M, Y, K channels, and the second video data comprise data of other channels except the first video data in C, M, Y, K channels.
For example, the first video data includes data of C, M, Y channels, and the second video data includes data of K channels; or, the first video data includes C, M, K channel data, and the second video data includes Y channel data; or, the first video data includes C, Y, K channel data, and the second video data includes M channel data; or, the first video data includes M, Y, K channel data, and the second video data includes C channel data; or, the first video data includes C, M channel data, and the second video data includes Y, K channel data; or, the first video data includes C, Y channel data, and the second video data includes M, K channel data; or, the first video data includes C, K channel data and the second video data includes M, Y channel data.
Optionally, when the video frame format of the video to be encoded is a CMYK format, splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded, where the splitting includes:
according to the color channels corresponding to the CMYK format, splitting the video to be encoded into first video data, second video data and third video data, wherein the first video data includes data of two of the C, M, Y, K channels, and the second video data and the third video data each include the data of one of the two remaining channels.
For example, the first video data includes data of C, M channels, the second video data includes data of K channels, and the third video data includes data of Y channels; or, the first video data includes C, Y channel data, the second video data includes M channel data, and the third video data includes K channel data; or, the first video data includes C, K channel data, the second video data includes M channel data, and the third video data includes Y channel data; or, the first video data includes M, Y channel data, the second video data includes C channel data, and the third video data includes K channel data; or, the first video data includes M, K channel data, the second video data includes C channel data, and the third video data includes Y channel data; or, the first video data includes Y, K-channel data, the second video data includes C-channel data, and the third video data includes M-channel data.
Optionally, when the video frame format of the video to be encoded is the HSL format, the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple paths corresponding to the video frame format of the video to be encoded includes:
according to the channel corresponding to the HSL format, splitting the video to be coded into first video data and second video data, wherein the first video data comprise H, S, L channel data of two channels, and the second video data comprise H, S, L channel data of another channel except the first video data.
For example, the first video data includes data of H, S channels, and the second video data includes data of L channels; or, the first video data includes H, L channel data, and the second video data includes S channel data; alternatively, the first video data includes S, L-channel data, and the second video data includes H-channel data.
Optionally, when the video frame format of the video to be encoded is an HSV format, splitting the video data of the video to be encoded into multiple paths of video data based on the multiple paths corresponding to the video frame format of the video to be encoded, where the splitting includes:
according to the channels corresponding to the HSV format, splitting a video to be coded into first video data and second video data, wherein the first video data comprises data of two channels in H, S, V channels, and the second video data comprises data of another channel except the first video data in H, S, V channels.
For example, the first video data includes data of H, S channels, and the second video data includes data of V channels; or, the first video data includes H, V channel data, and the second video data includes S channel data; alternatively, the first video data includes S, V-channel data, and the second video data includes H-channel data.
Optionally, when the video frame format of the video to be encoded is the HIS format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
according to a channel corresponding to the HIS format, splitting a video to be coded into first video data and second video data, wherein the first video data comprise H, I, S channel data of two channels, and the second video data comprise H, I, S channel data of another channel except the first video data.
For example, the first video data includes data of H, I channels, and the second video data includes data of S channels; or, the first video data includes H, S channel data, and the second video data includes I channel data; alternatively, the first video data includes I, S-channel data, and the second video data includes H-channel data.
Optionally, when the video frame format of the video to be encoded is a YUV format, the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded includes:
according to the channels corresponding to the YUV format, splitting a video to be coded into first video data and second video data, wherein the first video data comprises data of two channels in Y, U, V channels, and the second video data comprises data of another channel except the first video data in Y, U, V channels.
For example, the first video data includes data of Y, U channels, and the second video data includes data of V channels; or, the first video data includes Y, V channel data, and the second video data includes U channel data; alternatively, the first video data includes U, V-channel data, and the second video data includes Y-channel data.
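A sketch of the grouped split of step S301, using the YUV grouping as an example (Y in one path, U and V together in another); the grouping, names and array layout are assumptions for illustration.

```python
import numpy as np

def split_by_groups(frame: np.ndarray, channel_names: list[str],
                    groups: list[list[str]]) -> dict[str, np.ndarray]:
    """Split a frame into one path per channel group; a group may hold two or more channels."""
    index = {name: i for i, name in enumerate(channel_names)}
    paths = {}
    for group in groups:
        # Stack the grouped channels into an H x W x len(group) array for this path.
        planes = [frame[..., index[name]] for name in group]
        paths["+".join(group)] = np.stack(planes, axis=-1)
    return paths

# Example: YUV split into two paths, one carrying Y and one carrying U and V together.
yuv_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
paths = split_by_groups(yuv_frame, ["Y", "U", "V"], [["Y"], ["U", "V"]])
# paths["Y"] has shape (2160, 3840, 1); paths["U+V"] has shape (2160, 3840, 2).
```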
In a possible implementation, referring to fig. 4, after the encoding of the video data of each path respectively, the method further includes:
s401, packaging the coded video data of each path respectively, and sending the packaged video data of each path.
Each path of encoded video data is encapsulated separately to obtain multiple paths of encapsulated video data, and each path of encapsulated video data is then sent. Because each path of encoded video data is encapsulated independently, multiple transmission lines can be used for transmission; compared with encapsulating each whole video frame into one data packet, this reduces the limitation that the bandwidth of a single transmission line places on the video resolution.
In a possible implementation, referring to fig. 5, after the encoding of the video data of each path respectively, the method further includes:
S501, for the encoded video data of each path, encapsulating the pieces of frame data split from the same video frame into one data packet to obtain a frame data packet corresponding to each video frame, and sending each frame data packet.
The data into which a video frame is split are called frame data, and each path of video data includes a plurality of pieces of frame data. One data packet includes the pieces of frame data split from the same video frame. For example, if video frame A is split into channel-1 frame data a, channel-2 frame data b, and channel-3 frame data c, then channel-1 frame data a, channel-2 frame data b, and channel-3 frame data c are encapsulated into one data packet.
In the embodiments of the present application, the pieces of frame data split from the same video frame are encapsulated into one data packet, which facilitates synthesizing the video frame after decoding. Even if a packet is lost, only that one video frame is lost; the situation in which a reconstructed video frame is missing the data of a certain channel does not occur, so the display effect is improved.
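A sketch of the per-video-frame encapsulation of step S501: the encoded pieces that share a timestamp are grouped into one frame data packet. The packet layout, the EncodedFrameData structure and the use of a plain dictionary are assumptions; the patent does not specify a serialization format.

```python
from dataclasses import dataclass

@dataclass
class EncodedFrameData:
    channel: str
    timestamp: int
    bitstream: bytes   # output of the per-channel encoder for one frame

def packetize_per_video_frame(encoded_pieces: list[EncodedFrameData]) -> dict[int, dict[str, bytes]]:
    """Group all encoded pieces that share a timestamp into one frame data packet.

    Losing one packet then loses one whole video frame rather than leaving a
    reconstructed frame with the data of a certain channel missing.
    """
    packets: dict[int, dict[str, bytes]] = {}
    for piece in encoded_pieces:
        packets.setdefault(piece.timestamp, {})[piece.channel] = piece.bitstream
    return packets
```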
An embodiment of the present application further provides a video decoding method, applied to a decoding end, referring to fig. 6, where the method includes:
S601, acquiring video data to be decoded, wherein the video data to be decoded comprises multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to the multiple channels corresponding to the video frame format of the target video.
The video decoding method applied to the decoding end in the embodiments of the present application can be implemented by an electronic device with decoding capability; specifically, the electronic device may be a hard disk video recorder, a spliced display screen (video wall), or the like. The multiple paths of encoded video data can be obtained by any of the above video encoding methods applied to the encoding end, which is not repeated here. In a possible embodiment, the video frame format of the target video includes, but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
S602, decoding each path of coded video data to obtain each path of video data.
And decoding the coded video data by using a decoder to obtain each path of video data.
And S603, merging the video data of each path to obtain a target video.
And merging the data belonging to the same video frame in each path of video data according to the channel corresponding to each path of video data, thereby obtaining the target video.
In the embodiments of the present application, the video data to be decoded comprises multiple paths of encoded video data obtained by splitting each video frame of the target video according to the channels corresponding to its video frame format. Each path of video data contains only part of the data of the original video, which reduces the demands on the encoder; no down-sampling is needed before encoding and no up-sampling is needed after decoding, so the loss of image quality during encoding and decoding is reduced.
In one possible implementation, the video frames of the target video have timestamps, each path of video data includes a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
For example, the video frame format corresponds to channels 1-3, and the video frame a may be split into channel 1 frame data a, channel 2 frame data b, and channel 3 frame data c, where the time stamp of the video frame a is the same as the time stamp of the channel 1 frame data a, channel 2 frame data b, and channel 3 frame data c.
In the process of synthesizing the video frames, the frame data with the same timestamp can be synthesized into one video frame according to the timestamp of each frame data, so as to obtain the target video.
In the embodiments of the present application, each piece of frame data inherits the timestamp of the corresponding video frame, so that frame data belonging to different video frames can be distinguished by timestamp, which facilitates synthesizing the frame data back into video frames.
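A sketch of the merge step at the decoding end: decoded frame data that share a timestamp are stacked back into one frame in channel order. The function names and the channel-order argument are illustrative assumptions.

```python
import numpy as np

def merge_frame(decoded: dict[str, np.ndarray], channel_order: list[str]) -> np.ndarray:
    """Recombine the per-channel planes of one timestamp into a single video frame."""
    return np.stack([decoded[name] for name in channel_order], axis=-1)

def merge_video(frames_by_timestamp: dict[int, dict[str, np.ndarray]],
                channel_order: list[str]) -> list[np.ndarray]:
    # Frame data inherit the timestamp of the frame they were split from, so sorting by
    # timestamp and merging per timestamp restores the target video frame by frame.
    return [merge_frame(frames_by_timestamp[ts], channel_order)
            for ts in sorted(frames_by_timestamp)]
```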
In one possible embodiment, the channels are color channels. For example, the RGB format corresponds to the R, G, and B channels, and the CMYK format corresponds to the C, M, Y, and K channels. The target video was split into the data corresponding to each channel according to the channels corresponding to its video frame format, yielding the multiple paths of video data.
In one possible embodiment, referring to fig. 7, in the case where the video frame format of the target video is RGB format, and the video data includes R-channel video data, G-channel video data, and B-channel video data; the merging the video data of each path to obtain the target video includes:
and S701, merging the R channel video data, the G channel video data and the B channel video data to obtain a target video.
In one possible implementation, referring to fig. 8, in the case that the video frame format of the target video is the RGB format and the video data includes first video data and second video data, wherein the first video data includes data of two of the R, G, B channels and the second video data includes data of the remaining channel, the merging the video data of each path to obtain the target video includes:
s801, merging the first video data and the second video data to obtain a target video.
In one possible implementation, in the case that the video frame format of the target video is CMYK format, and the video data includes C-channel video data, M-channel video data, Y-channel video data, and K-channel video data; the merging the video data of each path to obtain the target video includes: and merging the C channel video data, the M channel video data, the Y channel video data and the K channel video data to obtain the target video.
In a possible implementation, in the case that the video frame format of the target video is the CMYK format and the video data includes first video data and second video data, wherein the first video data includes data of at least two of the C, M, Y, K channels and the second video data includes data of the remaining channels, the merging the video data of each path to obtain the target video includes: merging the first video data and the second video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is the CMYK format and the video data includes first video data, second video data and third video data, wherein the first video data includes data of two of the C, M, Y, K channels and the second video data and the third video data each include the data of one of the two remaining channels, the merging the video data of each path to obtain the target video includes: merging the first video data, the second video data and the third video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is the HSL format, and the video data includes H-channel video data, S-channel video data, and L-channel video data; the merging the video data of each path to obtain the target video includes: and merging the H channel video data, the S channel video data and the L channel video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is the HSL format, and the video data includes first video data and second video data, wherein the first video data includes data of two of H, S, L channels, and the second video data includes data of another one of H, S, L channels except the first video data; the merging the video data of each path to obtain the target video includes: and merging the first video data and the second video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is HSV format, and the video data includes H-channel video data, S-channel video data, and V-channel video data; the merging the video data of each path to obtain the target video includes: and merging the H channel video data, the S channel video data and the V channel video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is HSV format, and the video data includes first video data and second video data, wherein the first video data includes data of two of H, S, V channels, and the second video data includes data of another one of H, S, V channels except the first video data; the merging the video data of each path to obtain the target video includes: and merging the first video data and the second video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is the HIS format, and the video data includes H channel video data, I channel video data, and S channel video data; the merging the video data of each path to obtain the target video includes: and merging the H channel video data, the I channel video data and the S channel video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is the HIS format, and the video data includes first video data and second video data, wherein the first video data includes data of two of H, I, S channels, and the second video data includes data of another one of H, I, S channels except the first video data; the merging the video data of each path to obtain the target video includes: and merging the first video data and the second video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is YUV format, and the video data includes Y-channel video data, U-channel video data, and V-channel video data; the merging the video data of each path to obtain the target video includes: and merging the Y-channel video data, the U-channel video data and the V-channel video data to obtain the target video.
In one possible implementation, in the case that the video frame format of the target video is YUV format, and the video data includes first video data and second video data, wherein the first video data includes data of two of Y, U, V channels, and the second video data includes data of another one of Y, U, V channels except the first video data; the merging the video data of each path to obtain the target video includes: and merging the first video data and the second video data to obtain the target video.
Referring to fig. 9, the following description takes the RGB video frame format as an example. A video source outputs a video; the encoding end captures it through a video input port, stores the captured video image in the memory of the capture end, and records the capture timestamp. The captured video image is split into three paths of video data: R-channel video data, G-channel video data, and B-channel video data. Each path of video data is fed into encoding hardware for video encoding, and the resulting code streams are encapsulated. Each path of encapsulated video data is packaged and sent over the network. The decoding end receives the encapsulated video data sent by the encoding end over the network, decodes it to obtain the multiple paths of video data, merges the paths to restore the video in RGB format, and outputs the restored video through a video output port.
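The RGB walk-through above, condensed into a hedged end-to-end sketch. encode_channel and decode_channel are pass-through placeholders standing in for the real per-channel codec, and the network transport is omitted; none of these names come from the patent.

```python
import numpy as np

def encode_channel(plane: np.ndarray) -> bytes:
    # Placeholder for a real single-channel encoder (e.g. encoding hardware); raw bytes are
    # passed through so the round trip below can be checked.
    return plane.tobytes()

def decode_channel(bitstream: bytes, shape: tuple, dtype=np.uint8) -> np.ndarray:
    # Placeholder inverse of encode_channel.
    return np.frombuffer(bitstream, dtype=dtype).reshape(shape)

def encode_end(frame: np.ndarray, timestamp: int) -> dict:
    # Split the captured RGB frame into three paths and encode each path independently.
    packet = {"timestamp": timestamp}
    for i, name in enumerate(["R", "G", "B"]):
        packet[name] = encode_channel(frame[..., i])
    return packet   # one frame data packet, ready to be encapsulated and sent over the network

def decode_end(packet: dict, shape: tuple) -> np.ndarray:
    # Decode each path and merge the planes back into an RGB frame.
    planes = [decode_channel(packet[name], shape) for name in ["R", "G", "B"]]
    return np.stack(planes, axis=-1)

frame = np.random.randint(0, 256, size=(2160, 3840, 3), dtype=np.uint8)
restored = decode_end(encode_end(frame, timestamp=0), shape=(2160, 3840))
assert np.array_equal(frame, restored)   # lossless here only because the codec is a pass-through
```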
An embodiment of the present application further provides a video encoding apparatus, applied to an encoding end, referring to fig. 10, the apparatus including:
the video acquisition module 11 is configured to acquire a video to be encoded;
the video splitting module 12 is configured to split the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded;
and the video coding module 13 is configured to code each channel of video data respectively.
In a possible implementation manner, the video frames of the video to be encoded have timestamps, each path of video data includes multiple pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In a possible implementation manner, the video frame format of the video to be encoded includes, but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
In a possible implementation manner, the video splitting module 12 is specifically configured to: split the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the multiple channels, based on the multiple channels corresponding to the video frame format of the video to be encoded.
In a possible implementation manner, the video splitting module 12 is specifically configured to: split the video data of the video to be encoded, according to a plurality of channel groups of the multiple channels corresponding to the video frame format of the video to be encoded, into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group includes at least two channels.
An embodiment of the present application provides a video decoding apparatus, applied to a decoding end, referring to fig. 11, the apparatus includes:
a data receiving module 21, configured to acquire video data to be decoded, wherein the video data to be decoded includes multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to the multiple channels corresponding to the video frame format of the target video;
the video decoding module 22 is configured to decode each path of encoded video data to obtain each path of video data;
and the video merging module 23 is configured to merge the video data of each channel to obtain the target video.
In one possible implementation, the video frames of the target video have timestamps, each path of video data includes a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
In a possible embodiment, the video frame format of the target video includes but is not limited to: RGB format, CMYK format, HSL format, HSV format, HIS format, YUV format.
In a possible implementation manner, in the case that the video frame format of the target video is the RGB format and the video data includes R-channel video data, G-channel video data, and B-channel video data, the video merging module 23 is specifically configured to: merge the R-channel video data, the G-channel video data, and the B-channel video data to obtain the target video.
In a possible implementation manner, in the case that the video frame format of the target video is the RGB format and the video data includes first video data and second video data, wherein the first video data includes data of two of the R, G, B channels and the second video data includes data of the remaining channel, the video merging module 23 is specifically configured to: merge the first video data and the second video data to obtain the target video.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
when the processor is used for executing the computer program stored in the memory, the following steps are realized:
acquiring a video to be encoded, wherein the video to be encoded comprises a plurality of video frames;
splitting the video to be encoded into multiple paths of video data according to the channels corresponding to the video frame format of the video to be encoded;
and encoding each path of video data separately.
Optionally, referring to fig. 12, the electronic device according to the embodiment of the present application further includes a communication interface 902 and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 complete communication with each other through the communication bus 904.
Optionally, when the processor is configured to execute the computer program stored in the memory, the processor can further implement any of the above video encoding methods applied to the encoding end.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
the processor is configured to implement any of the above-described video decoding methods applied to the decoding end when executing the computer program stored in the memory.
The communication bus mentioned in the electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also a DSP (Digital Signal Processing), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any of the above video encoding methods applied to an encoding end.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any of the above video decoding methods applied to the decoding end.
It should be noted that, in this document, the technical features in the various alternatives can be combined to form the scheme as long as the technical features are not contradictory, and the scheme is within the scope of the disclosure of the present application. Relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (20)
1. A video coding method applied to an encoding end, the method comprising:
acquiring a video to be coded;
splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded;
and respectively encoding each path of video data.
2. The method of claim 1, wherein the video frames of the video to be encoded have timestamps, each path of video data comprises a plurality of pieces of frame data, and each piece of frame data split from a video frame has the same timestamp as the video frame it was split from.
3. The method according to claim 1 or 2, wherein the splitting the video data of the video to be encoded into multiple paths of video data based on multiple paths corresponding to the video frame format of the video to be encoded comprises:
and splitting the video data of the video to be coded into multi-channel video data corresponding to the multi-channel one by one on the basis of the multi-channel corresponding to the video frame format of the video to be coded.
4. The method according to claim 1 or 2, wherein the splitting the video data of the video to be encoded into multiple paths of video data based on the multiple channels corresponding to the video frame format of the video to be encoded comprises:
splitting, based on the multiple channels corresponding to the video frame format of the video to be encoded and according to channel groups of the multiple channels, the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group comprises at least two channels.
5. The method of claim 1, wherein the video frame format of the video to be encoded is one of an RGB format, a CMYK format, an HSL format, an HSV format, an HSI format, and a YUV format.
6. A video decoding method, applied to a decoding end, the method comprising:
acquiring video data to be decoded, wherein the video data to be decoded comprises multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to multiple channels corresponding to a video frame format of the target video;
decoding each path of encoded video data to obtain each path of video data;
and merging the paths of video data to obtain the target video.
7. The method of claim 6, wherein video frames of the target video have timestamps, each path of video data comprises a plurality of pieces of frame data, and each piece of frame data split from a same video frame has the same timestamp as the video frame from which it is split.
8. The method of claim 6, wherein the video frame format of the target video includes, but is not limited to: an RGB format, a CMYK format, an HSL format, an HSV format, an HSI format, and a YUV format.
9. A video encoding apparatus, applied to an encoding end, the apparatus comprising:
a video acquisition module, configured to acquire a video to be encoded;
a video splitting module, configured to split video data of the video to be encoded into multiple paths of video data based on multiple channels corresponding to a video frame format of the video to be encoded;
and a video encoding module, configured to encode each path of video data separately.
10. The apparatus of claim 9, wherein video frames of the video to be encoded have timestamps, each path of video data comprises a plurality of pieces of frame data, and each piece of frame data split from a same video frame has the same timestamp as the video frame from which it is split.
11. The apparatus of claim 9, wherein the video frame format of the video to be encoded includes, but is not limited to: an RGB format, a CMYK format, an HSL format, an HSV format, an HSI format, and a YUV format.
12. The apparatus according to any one of claims 9 to 11, wherein the video splitting module is specifically configured to: split the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the multiple channels, based on the multiple channels corresponding to the video frame format of the video to be encoded.
13. The apparatus according to any one of claims 9 to 11, wherein the video splitting module is specifically configured to: split, based on the multiple channels corresponding to the video frame format of the video to be encoded and according to channel groups of the multiple channels, the video data of the video to be encoded into multiple paths of video data in one-to-one correspondence with the channel groups, wherein at least one channel group comprises at least two channels.
14. A video decoding apparatus, applied to a decoding end, the apparatus comprising:
a data receiving module, configured to acquire video data to be decoded, wherein the video data to be decoded comprises multiple paths of encoded video data, and the video data is obtained by splitting each video frame of a target video according to multiple channels corresponding to a video frame format of the target video;
a video decoding module, configured to decode each path of encoded video data to obtain each path of video data;
and a video merging module, configured to merge the paths of video data to obtain the target video.
15. The apparatus of claim 14, wherein video frames of the target video have timestamps, each path of video data comprises a plurality of pieces of frame data, and each piece of frame data split from a same video frame has the same timestamp as the video frame from which it is split.
16. The apparatus of claim 14, wherein the video frame format of the target video includes, but is not limited to: an RGB format, a CMYK format, an HSL format, an HSV format, an HSI format, and a YUV format.
17. An electronic device comprising a processor and a memory;
the memory is configured to store a computer program;
and the processor is configured to implement the video encoding method according to any one of claims 1 to 5 when executing the computer program stored in the memory.
18. An electronic device comprising a processor and a memory;
the memory is configured to store a computer program;
and the processor is configured to implement the video decoding method according to any one of claims 6 to 8 when executing the computer program stored in the memory.
19. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the video encoding method according to any one of claims 1 to 5.
20. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the video decoding method according to any one of claims 6 to 8.
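As a companion to the encoder-side sketch given earlier in the description, the following illustrates the decoder-side behaviour recited in claims 6 and 7: each path of encoded video data is decoded independently, and the decoded pieces are merged back into full frames by matching the timestamps they inherited from the original video frames. This is a minimal sketch under the assumption that each decoded path yields (timestamp, ndarray) pairs of matching height and width; the function name and data layout are illustrative only and not part of the claims.

```python
import numpy as np

def merge_paths(decoded_paths):
    """Merge per-channel (or per-group) decoded paths back into frames.

    decoded_paths: list of dicts mapping timestamp -> ndarray of shape
    (H, W, k), ordered so that concatenating along the channel axis
    restores the original channel layout of the target video.
    """
    # Only timestamps present in every path can form a complete frame;
    # the shared timestamp is what ties the split pieces back together.
    common_ts = set(decoded_paths[0])
    for path in decoded_paths[1:]:
        common_ts &= set(path)
    frames = []
    for ts in sorted(common_ts):
        planes = [path[ts] for path in decoded_paths]
        frames.append((ts, np.concatenate(planes, axis=2)))
    return frames

# Usage sketch with a tiny 2x2 "YUV" frame split into a Y path and a UV path:
# y_path  = {0: np.zeros((2, 2, 1))}
# uv_path = {0: np.ones((2, 2, 2))}
# frames = merge_paths([y_path, uv_path])   # -> [(0, array of shape (2, 2, 3))]
```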
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010128221.2A CN113329269A (en) | 2020-02-28 | 2020-02-28 | Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113329269A (en) | 2021-08-31 |
Family
ID=77412727
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010128221.2A (CN113329269A, pending) | 2020-02-28 | 2020-02-28 | Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113329269A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101663896A (en) * | 2007-04-23 | 2010-03-03 | 汤姆森许可贸易公司 | Method and apparatus for encoding video data, method and apparatus for decoding encoded video data and encoded video signal |
CN101889449A (en) * | 2007-06-28 | 2010-11-17 | 三菱电机株式会社 | Image encoder and image decoder |
CN101282437A (en) * | 2008-04-19 | 2008-10-08 | 青岛海信电器股份有限公司 | Decoding device |
US20100091132A1 (en) * | 2008-10-09 | 2010-04-15 | Silicon Motion, Inc. | Image capturing device and image preprocessing method thereof |
CN105325000A (en) * | 2013-06-12 | 2016-02-10 | 三菱电机株式会社 | Image encoding device, image encoding method, image decoding device, and image decoding method |
CN106464887A (en) * | 2014-03-06 | 2017-02-22 | 三星电子株式会社 | Image decoding method and device thereof, image encoding method and device thereof |
CN106658011A (en) * | 2016-12-09 | 2017-05-10 | 深圳市云宙多媒体技术有限公司 | Panoramic video coding and decoding methods and devices |
CN109963185A (en) * | 2017-12-26 | 2019-07-02 | 杭州海康威视数字技术股份有限公司 | Video data transmitting method, image display method, device, system and equipment |
CN109993817A (en) * | 2017-12-28 | 2019-07-09 | 腾讯科技(深圳)有限公司 | A kind of implementation method and terminal of animation |
CN110719496A (en) * | 2018-07-11 | 2020-01-21 | 杭州海康威视数字技术股份有限公司 | Multi-path code stream packaging and playing method, device and system |
CN109862361A (en) * | 2019-02-03 | 2019-06-07 | 北京深维科技有限公司 | A kind of webp image encoding method, device and electronic equipment |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113784123A (en) * | 2021-11-11 | 2021-12-10 | 腾讯科技(深圳)有限公司 | Video encoding method and apparatus, storage medium, and electronic device |
CN113784123B (en) * | 2021-11-11 | 2022-03-15 | 腾讯科技(深圳)有限公司 | Video encoding method and apparatus, storage medium, and electronic device |
CN115150639A (en) * | 2022-09-01 | 2022-10-04 | 北京蔚领时代科技有限公司 | Weak network resisting method and device based on distributed encoder |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111263208B (en) | Picture synthesis method and device, electronic equipment and storage medium | |
US11671550B2 (en) | Method and device for color gamut mapping | |
US11257195B2 (en) | Method and device for decoding a high-dynamic range image | |
EP3616395B1 (en) | Method and device for color gamut mapping | |
KR20200081386A (en) | Method and device for generating a second image from a first image | |
KR102523233B1 (en) | Method and device for decoding a color picture | |
EP3242482A1 (en) | Method and apparatus for encoding/decoding a high dynamic range picture into a coded bitstream | |
CN113170157A (en) | Color conversion in layered coding schemes | |
EP3267685B1 (en) | Transmission device, transmission method, receiving device, and receiving method | |
EP3619912B1 (en) | Method and device for color gamut mapping | |
EP3477947A1 (en) | Method and device for obtaining a second image from a first image when the dynamic range of the luminance of said first image is greater than the dynamic range of the luminance of said second image | |
KR20210028654A (en) | Method and apparatus for processing medium dynamic range video signals in SL-HDR2 format | |
EP3453175B1 (en) | Method and apparatus for encoding/decoding a high dynamic range picture into a coded bistream | |
CN113329269A (en) | Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium | |
EP3562165B1 (en) | Custom data indicating nominal range of samples of media content | |
EP3714603B1 (en) | Method and apparatus for colour correction during hdr to sdr conversion | |
EP3367684A1 (en) | Method and device for decoding a high-dynamic range image | |
CN114422735A (en) | Video recorder, video data processing method and device and electronic equipment | |
CN114422734B (en) | Video recorder, video data processing method and device and electronic equipment | |
US20170180741A1 (en) | Video chrominance information coding and video processing | |
KR20180054623A (en) | Determination of co-localized luminance samples of color component samples for HDR coding / decoding | |
CN119788870A (en) | Data processing method, device and system | |
EP3528201A1 (en) | Method and device for controlling saturation in a hdr image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210831 |