CN118870030B - Preprocessing method for image coding, computer equipment, medium and chip
- Publication number
- CN118870030B, CN202411304554.0A
- Authority
- CN
- China
- Prior art keywords
- image
- downsampled
- resolution
- encoding
- chromaticity
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The application relates to the field of computer technology and provides a preprocessing method for image coding, a computer device, a medium and a chip. The method comprises: receiving a first image; processing the first image at the luminance level and the chrominance level to obtain a first luminance map and a first chrominance map of the first image, respectively; dividing the first luminance map based on a first encoding maximum resolution of a first encoding chip to obtain a plurality of first luminance map blocks; downsampling the first chrominance map according to a first compression ratio to obtain a downsampled first chrominance map; and then selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution to obtain a plurality of downsampled first chrominance map blocks. A preprocessing result of the first image is thus obtained and is encoded by the first encoding chip, which helps improve encoding efficiency and picture quality.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a preprocessing method for image encoding, a computer device, a medium, and a chip.
Background
With the development of ultra-high-definition display technology and video recording technology, large-sized ultra-high-definition television display terminals, intelligent terminals with ultra-high-definition video recording functions, and similar devices are widely used. It is therefore often necessary to display ultra-high-definition image and video data to a user, or to transmit such data from one node to another. If ultra-high-definition image and video data are encoded with an encoding chip that itself has ultra-high-definition processing capability, problems such as high chip cost and large chip footprint arise, which makes it difficult to integrate the encoding chip into various devices. Middle- and low-end encoding chips, on the other hand, generally do not support ultra-high-definition resolutions, such as 4K or 8K encoding. In the prior art, the resolution of the original image or video data is first reduced, encoding is then performed with a middle- or low-end encoding chip, an upsampling algorithm such as pixel interpolation is applied at the decoding end, and playback is finally performed at an ultra-high-definition resolution such as 4K or 8K. However, because the resolution of the original image or video data is reduced at the source, information is lost; even if the picture quality is subsequently improved by upsampling algorithms such as pixel interpolation, details in the picture still appear blurred or unrealistic. In addition, from the source of the ultra-high-definition image and video data, such as a live broadcast or the server of a program content provider, to the television display terminal that finally presents it to the user, the transmission chain may pass through multiple transmission nodes, such as data distribution nodes and network transmission nodes, at which repeated encoding and decoding of the transmitted data may be involved. The differences in the image-encoding processing capability of the different transmission nodes, and their impact on the quality of the picture finally presented to the user, therefore also need to be taken into account.
Therefore, the application provides a preprocessing method, computer equipment, medium and chip for image coding, which are used for solving the technical problems in the prior art.
Disclosure of Invention
In a first aspect, the present application provides a preprocessing method for image encoding. The preprocessing method comprises: receiving a first image, the first image having a first image resolution; processing the first image at the luminance level and the chrominance level to obtain a first luminance map of the first image and a first chrominance map of the first image, respectively; dividing the first luminance map based on a first encoding maximum resolution of a first encoding chip to obtain a plurality of first luminance map blocks, where the resolution of the largest first luminance map block is not higher than the first encoding maximum resolution; downsampling the first chrominance map according to a first compression ratio to obtain a downsampled first chrominance map; then selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution to obtain a plurality of downsampled first chrominance map blocks; and using the plurality of first luminance map blocks and the downsampled first chrominance map, or the plurality of first luminance map blocks and the plurality of downsampled first chrominance map blocks, as a preprocessing result of the first image, where the preprocessing result of the first image is used for encoding by the first encoding chip.
According to the application, the additional cost of hardware upsampling is avoided, and there is no need to simulate or approximate, through high-complexity algorithmic operations, the information lost by resolution reduction; instead, the picture resolution is maintained through block division and encoder multiplexing. The luminance information and the chrominance information of the first image are extracted separately, so that the resulting first luminance map and first chrominance map present the first image at the luminance level and at the chrominance level, respectively, and each can be given its own processing flow. Dividing the first luminance map retains its original information, matches the user's higher sensitivity to changes in luminance information, and helps improve encoding efficiency and picture quality. Downsampling the first chrominance map reduces its resolution, which helps adapt an encoding chip with only low-resolution encoding capability and increases encoding speed, while the limit placed on the first compression ratio controls the impact of image distortion. Selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution flexibly adapts the encoding capability of the encoding chip to the resolution of the original image. Overall, image encoding of a high-resolution image is achieved on the basis of an encoding chip adapted to low-resolution encoding, improving encoding efficiency and picture quality.
In a possible implementation manner of the first aspect of the present application, when the image resolution of the downsampled first chromaticity diagram is higher than the first encoding maximum resolution, the downsampled first chromaticity diagram is divided so as to obtain the plurality of downsampled first chromaticity diagram blocks, and the plurality of first luminance diagram blocks and the plurality of downsampled first chromaticity diagram blocks are used as the preprocessing result of the first image, and when the image resolution of the downsampled first chromaticity diagram is not higher than the first encoding maximum resolution, the downsampled first chromaticity diagram is not divided, and the plurality of first luminance diagram blocks and the downsampled first chromaticity diagram are used as the preprocessing result of the first image.
In a possible implementation manner of the first aspect of the present application, the first compression ratio is not higher than fifty percent, and the image resolution of the downsampled first chromaticity diagram is determined based on the first image resolution and the first compression ratio.
In a possible implementation manner of the first aspect of the present application, the preprocessing result of the first image is encoded by the first encoding chip, so as to obtain a plurality of code streams, and the plurality of code streams are marked, synchronized and packaged, so as to obtain the encoding result of the first image.
In a possible implementation manner of the first aspect of the present application, the first image resolution is 4K resolution or 8K resolution, and the first encoding maximum resolution is 1080P.
In a possible implementation manner of the first aspect of the present application, processing the first image according to the luminance level and the chrominance level, so as to obtain a first luminance map of the first image and a first chrominance map of the first image, respectively, includes determining a triplet of each pixel point in the first image, where the triplet includes a first element representing luminance and a second element and a third element representing chrominance, constructing the first luminance map of the first image using the first element representing luminance in the triplet of each pixel point in the first image, and constructing the first chrominance map of the first image using the second element representing chrominance and the third element in the triplet of each pixel point in the first image.
In a possible implementation manner of the first aspect of the present application, the first image is in an RGB image format, and the triplet of each pixel point in the first image is obtained based on a conversion formula from the RGB image format to the YUV image format.
In a possible implementation manner of the first aspect of the present application, the transmission chain of the first image associates a plurality of transmission nodes, the first encoding chip is disposed on a first transmission node of the plurality of transmission nodes, and the preprocessing method is applied to preprocessing before image encoding, by the first encoding chip, of the first image received by the first transmission node on the first transmission node.
In a possible implementation manner of the first aspect of the present application, the plurality of transmission nodes includes an image video recording node, an image video compression node, an image video distribution node, a network transmission node, a terminal device receiving node, and an image video decompression node.
In a possible implementation manner of the first aspect of the present application, the first transmission node is any transmission node of the plurality of transmission nodes, and the first coding maximum resolution of the first coding chip represents a coding processing capability of the first transmission node.
In a second aspect, embodiments of the present application further provide a computer device, the computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing a method according to any one of the implementations of any one of the above aspects when the computer program is executed.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
In a fourth aspect, embodiments of the present application also provide a computer program product comprising instructions stored on a computer-readable storage medium, which when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
In a fifth aspect, an embodiment of the present application further provides a preprocessing chip for image encoding. The preprocessing chip comprises a receiving module, a first processing module, a second processing module and an output module. The receiving module is used for receiving a first image, the first image having a first image resolution. The first processing module is used for processing the first image at the luminance level and the chrominance level to obtain a first luminance map of the first image and a first chrominance map of the first image, respectively. The second processing module is used for dividing the first luminance map based on a first encoding maximum resolution of a first encoding chip to obtain a plurality of first luminance map blocks, the resolution of the largest first luminance map block being not higher than the first encoding maximum resolution, for downsampling the first chrominance map according to a first compression ratio to obtain a downsampled first chrominance map, and then for selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution to obtain a plurality of downsampled first chrominance map blocks. The output module is used for taking the plurality of first luminance map blocks and the downsampled first chrominance map, or the plurality of first luminance map blocks and the plurality of downsampled first chrominance map blocks, as the preprocessing result of the first image, where the preprocessing result of the first image is used for encoding by the first encoding chip.
According to the application, the additional cost of hardware upsampling is avoided, and there is no need to simulate or approximate, through high-complexity algorithmic operations, the information lost by resolution reduction; instead, the picture resolution is maintained through block division and encoder multiplexing. The luminance information and the chrominance information of the first image are extracted separately, so that the resulting first luminance map and first chrominance map present the first image at the luminance level and at the chrominance level, respectively, and each can be given its own processing flow. Dividing the first luminance map retains its original information, matches the user's higher sensitivity to changes in luminance information, and helps improve encoding efficiency and picture quality. Downsampling the first chrominance map reduces its resolution, which helps adapt an encoding chip with only low-resolution encoding capability and increases encoding speed, while the limit placed on the first compression ratio controls the impact of image distortion. Selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution flexibly adapts the encoding capability of the encoding chip to the resolution of the original image, so that image encoding of a high-resolution image is achieved on the basis of an encoding chip adapted to low-resolution encoding, improving encoding efficiency and picture quality.
In a possible implementation manner of the fifth aspect of the present application, when the image resolution of the downsampled first chromaticity diagram is higher than the first encoding maximum resolution, the second processing module divides the downsampled first chromaticity diagram to obtain the plurality of downsampled first chromaticity diagram blocks, and the output module uses the plurality of first luminance diagram blocks and the plurality of downsampled first chromaticity diagram blocks as a preprocessing result of the first image, and when the image resolution of the downsampled first chromaticity diagram is not higher than the first encoding maximum resolution, the second processing module does not divide the downsampled first chromaticity diagram, and the output module uses the plurality of first luminance diagram blocks and the downsampled first chromaticity diagram as a preprocessing result of the first image.
In a possible implementation manner of the fifth aspect of the present application, the preprocessing result of the first image is encoded by the first encoding chip, so as to obtain a plurality of code streams, and the plurality of code streams are marked, synchronized and packaged, so as to obtain the encoding result of the first image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a preprocessing method for image coding according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a preprocessing chip for image encoding according to an embodiment of the present application;
fig. 3 is a schematic diagram of a first image transmission chain according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that in the description of the application, "at least one" means one or more than one, and "a plurality" means two or more than two. In addition, the words "first," "second," and the like, unless otherwise indicated, are used solely for the purposes of description and are not to be construed as indicating or implying a relative importance or order.
Fig. 1 is a schematic flow chart of a preprocessing method for image coding according to an embodiment of the present application. As shown in fig. 1, the preprocessing method includes the following steps.
Step S110, a first image is received, wherein the first image has a first image resolution.
Step S120, the first image is processed at the luminance level and the chrominance level, so that a first luminance map of the first image and a first chrominance map of the first image are obtained, respectively.
Step S130, dividing the first luminance map based on a first coding maximum resolution of a first coding chip to obtain a plurality of first luminance map blocks, wherein the resolution of the maximum first luminance map block in the plurality of first luminance map blocks is not higher than the first coding maximum resolution, downsampling the first chrominance map according to a first compression ratio to obtain a downsampled first chrominance map, and then selectively dividing the downsampled first chrominance map based on the first coding maximum resolution to obtain a plurality of downsampled first chrominance map blocks.
Step S140, using the plurality of first luminance map blocks and the downsampled first chromaticity diagram as a preprocessing result of the first image, or using the plurality of first luminance map blocks and the plurality of downsampled first chromaticity diagram blocks as a preprocessing result of the first image, where the preprocessing result of the first image is used for encoding by the first encoding chip.
Referring to fig. 1, the preprocessing method for image encoding shown in fig. 1 may be applied to applications such as ultra-high definition display and ultra-high definition image video recording, and is used for preprocessing ultra-high definition image video data before an image video encoding operation, so that preprocessed image video data, such as preprocessed images, preprocessed video frames, and the like, may be subjected to the image video encoding operation through a middle-low-end chip, where such a middle-low-end chip generally does not support the processing capability of ultra-high definition resolution. For example, ultra-high definition image video data has ultra-high definition resolution such as 4K resolution and 8K resolution, and if image encoding is performed using an encoding chip having processing capability of the corresponding ultra-high definition resolution, there are problems in that the chip cost is high, the chip occupation area is large, and the like, and it is also disadvantageous to integrate the encoding chip into various devices. In applications such as ultra-high definition display and ultra-high definition video recording, from the source of the ultra-high definition video data, such as a live broadcast or a server of a program content provider, to a television display terminal device that is ultimately presented to a user, the entire transmission chain may be routed through multiple transmission nodes, such as a data distribution node and a network transmission node, where repeated encoding and decoding of the image video data to be transmitted may be involved, and thus differences in processing power in the respective image encoding of the different transmission nodes may need to be considered, as well as the impact on the quality of the image that is ultimately presented to the user. The following describes in detail the steps shown in fig. 1, how the preprocessing method for image coding shown in fig. 1 can adapt the processing capability of the encoding chip by preprocessing the original high-resolution image (including the image frames in the video data), so as to achieve the enhancement of the image coding efficiency, and can also utilize the mature encoding chip adapted to the low-resolution image to perform the image video coding operation.
With continued reference to fig. 1, at step S110, a first image is received, wherein the first image has a first image resolution. Here, the first image is image data in original image video data of high resolution that needs to be encoded or an image frame in video data. In some examples, the first image resolution refers to high resolution employed in applications such as ultra-high definition display and ultra-high definition image video recording, e.g., 4K resolution and 8K resolution. In some examples, the first image resolution may refer to any resolution higher than the processing power of the encoding chip, e.g., the power of the encoding chip is adapted to 1080P resolution, then the first image resolution may refer to any resolution higher than 1080P resolution, e.g., 1980P resolution. When the image resolution of the image to be encoded is higher than the processing capability of the encoding chip used for image encoding (such as the encoding maximum resolution of the encoding chip), it means that the image data to be processed exceeds the processing capability range of the encoding chip in design, and therefore the image data cannot be supported by the encoding capability of the encoding chip, which may cause encoding errors and low encoding efficiency. If the original image data is subjected to a resolution-reducing operation to reduce the pixel density, that is, to convert an original high-resolution image into a low-resolution image, this means that the picture information is lost, and then although the picture quality can be improved by a restoration algorithm or a resolution-increasing operation, for example, the pixel density is increased by an upsampling algorithm such as pixel interpolation, the details on the picture still become blurred or unrealistic in a manner that the lost information is presumed or approximated by the algorithm.
With continued reference to fig. 1, after step S110, step S120 is performed. In step S120, the first image is processed at the luminance level and the chrominance level, so as to obtain a first luminance map of the first image and a first chrominance map of the first image, respectively. Here, the first luminance map contains the luminance information of the first image, that is, the gray-scale values, and may for example be a black-and-white image, while the first chrominance map contains the chrominance information of the first image, that is, hue and saturation. Processing the first image at the luminance level and the chrominance level means that the luminance information and the chrominance information of the first image are extracted from it separately, so that the resulting first luminance map and first chrominance map present the first image at the luminance level and the chrominance level, respectively. The first luminance map and the first chrominance map can then each be given a corresponding processing flow. This exploits the principle that the human eye is less sensitive to chrominance than to luminance (because the number of cells in the human eye that perceive luminance is generally larger than the number that perceive chrominance), so that image encoding of a high-resolution image can be realized on the basis of an encoding chip with low-resolution encoding capability, improving encoding efficiency and picture quality. It should be appreciated that in step S120 the first image may be processed at the luminance level and the chrominance level by any suitable algorithm, model, hardware or circuitry to obtain the first luminance map and the first chrominance map. In some examples, the first image may be in the YUV data format, in which each pixel of the high-resolution original image is represented as a YUV triplet, where Y represents luminance, i.e., the gray-scale value, and U and V represent chrominance, i.e., hue and saturation. The first luminance map may then be constructed by extracting Y for each pixel, and the first chrominance map by extracting U and V for each pixel. In other examples, the first image may be in the RGB data format, i.e., the data format corresponding to the three primary colors (red, green, blue); the first image in the YUV data format may be obtained through a conversion formula from the RGB data format to the YUV data format, after which the luminance information and chrominance information are extracted to obtain the first luminance map and the first chrominance map, respectively. In other examples, feature extraction of luminance feature information and of chrominance feature information may be performed by two neural network models or two neural network branches, respectively, to obtain the first luminance map of the first image and the first chrominance map of the first image.
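As an illustrative sketch only (not the claimed implementation), the following Python/NumPy code shows one way the luminance/chrominance separation described above could be performed for an RGB input. The BT.601 coefficients are an assumed example of "a conversion formula from the RGB data format to the YUV data format"; the function and array names are hypothetical.

```python
import numpy as np

def split_luma_chroma(rgb: np.ndarray):
    """Split an H x W x 3 RGB image (float values in [0, 1]) into a luminance
    map Y and a chrominance map stacked as (U, V). BT.601 coefficients are
    assumed here purely for illustration."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # first element: luminance (gray-scale value)
    u = 0.492 * (b - y)                        # second element: chrominance
    v = 0.877 * (r - y)                        # third element: chrominance
    luma_map = y                               # "first luminance map"
    chroma_map = np.stack([u, v], axis=-1)     # "first chrominance map" (hue/saturation information)
    return luma_map, chroma_map
```

For a YUV-format input the same split reduces to reading out the Y plane and the U/V planes directly, without the conversion step.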
With continued reference to fig. 1, after step S120, step S130 is performed. In step S130, the first luminance map is divided based on a first encoding maximum resolution of a first encoding chip to obtain a plurality of first luminance map blocks, where the resolution of the largest first luminance map block is not higher than the first encoding maximum resolution; the first chrominance map is downsampled according to a first compression ratio to obtain a downsampled first chrominance map; and the downsampled first chrominance map is then selectively divided based on the first encoding maximum resolution to obtain a plurality of downsampled first chrominance map blocks. In order to flexibly adapt the encoding capability of any encoding chip, how the first luminance map is processed is specified by the first encoding maximum resolution of the first encoding chip. The plurality of first luminance map blocks is obtained by dividing the first luminance map, and constraining the resolution of the largest first luminance map block to be no higher than the first encoding maximum resolution both makes full use of the encoding capability of the first encoding chip and ensures that no luminance map block exceeds that capability, so that the first encoding chip can handle the encoding of each of the plurality of first luminance map blocks. The first luminance map may be divided in any suitable manner, for example in the horizontal direction, in the vertical direction, or according to a predetermined division template such as a four-cell or nine-cell template. The first luminance map blocks may have the same size or different sizes. Because the largest first luminance map block generally has the highest resolution, constraining its resolution to be no higher than the first encoding maximum resolution is what adapts the blocks to the processing capability of the first encoding chip. For example, if the first luminance map has a resolution of 3K by 2K and the first encoding maximum resolution of the first encoding chip is 1080P (i.e., an encoding chip adapted to images of 1920 x 1080), the first luminance map can be divided equally into four blocks so that the largest block is 1.5K by 1K, whose horizontal and vertical resolutions both fit the processing capability of the first encoding chip. Given the principle that the human eye is less sensitive to chrominance than to luminance (because the number of cells in the human eye that perceive luminance is generally larger than the number that perceive chrominance), dividing the first luminance map based on the first encoding maximum resolution preserves the original information of the first luminance map, matches the user's sensitivity to changes in luminance information, and helps improve encoding efficiency and picture quality.
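A minimal sketch of the tiling step, under the assumption that the plane is split into an integer grid of roughly equal tiles whose width and height do not exceed the encoder maximum; the grid strategy, the 1080P default and the helper name are illustrative choices, since the text allows any division scheme (horizontal, vertical, or template based).

```python
import math
import numpy as np

def tile_map(plane: np.ndarray, max_w: int = 1920, max_h: int = 1080):
    """Divide a plane (e.g. the first luminance map) into a grid of tiles
    such that no tile exceeds max_w x max_h, taken here to stand in for the
    first encoding maximum resolution."""
    h, w = plane.shape[:2]
    rows = math.ceil(h / max_h)                 # tile rows needed
    cols = math.ceil(w / max_w)                 # tile columns needed
    tile_h = math.ceil(h / rows)
    tile_w = math.ceil(w / cols)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(plane[r * tile_h:(r + 1) * tile_h,
                               c * tile_w:(c + 1) * tile_w])
    return tiles

# Example consistent with the text: a 3840 x 2160 (4K) luminance map and a
# 1080P encoder yield a 2 x 2 grid of 1920 x 1080 tiles, each of which fits
# the encoder's capability.
```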
In addition, in step S130, the first chrominance map is downsampled according to a first compression ratio to obtain a downsampled first chrominance map, and the downsampled first chrominance map is then selectively divided based on the first encoding maximum resolution to obtain a plurality of downsampled first chrominance map blocks. Here, the first compression ratio means that the resolution of the original map is reduced by a certain proportion, for example 20%, 30% or 50%, so some information in the original map is lost. In order to control the effect of image distortion, the first compression ratio used for downsampling should not exceed 50%, meaning that the original resolution of the first chrominance map is at most halved, for example from 8K to 4K. To flexibly adapt the encoding capability of any encoding chip, how the first chrominance map is processed is likewise specified by the first encoding maximum resolution of the first encoding chip; because the original resolution of the first chrominance map is not known in advance, the downsampled first chrominance map is divided selectively, based on the first encoding maximum resolution, into the plurality of downsampled first chrominance map blocks. After downsampling according to the first compression ratio, the downsampled first chrominance map may still have a resolution higher than the first encoding maximum resolution; for example, if the original resolution of the first chrominance map is 8K, downsampling with a compression ratio of at most 50% yields a 4K result, which is still higher than a first encoding maximum resolution of 1080P. In that case, constraining the resolution of the largest downsampled first chrominance map block to be no higher than the first encoding maximum resolution adapts the blocks to the processing capability of the first encoding chip. Conversely, when downsampling according to the first compression ratio already brings the resolution of the downsampled first chrominance map down to or below the first encoding maximum resolution, the downsampled first chrominance map can be used directly without division, which improves encoding efficiency.
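A possible sketch of the chrominance branch, assuming nearest-neighbour resampling for the downsampling step (chosen only to keep the sketch dependency-free; any resampling filter could be used) and reusing the hypothetical tile_map helper from the previous sketch. The 50% cap mirrors the upper bound stated for the first compression ratio.

```python
import numpy as np

def downsample_chroma(chroma: np.ndarray, ratio: float = 0.5,
                      max_w: int = 1920, max_h: int = 1080):
    """Reduce the resolution of the first chrominance map by `ratio` per
    dimension, then divide it only if the result still exceeds the assumed
    first encoding maximum resolution."""
    assert 0.0 < ratio <= 0.5, "first compression ratio is capped at fifty percent"
    h, w = chroma.shape[:2]
    new_h, new_w = int(h * (1.0 - ratio)), int(w * (1.0 - ratio))
    rows = np.linspace(0, h - 1, new_h).round().astype(int)   # nearest-neighbour row picks
    cols = np.linspace(0, w - 1, new_w).round().astype(int)   # nearest-neighbour column picks
    down = chroma[rows][:, cols]
    if new_w > max_w or new_h > max_h:        # still above the encoding maximum resolution
        return tile_map(down, max_w, max_h)   # selective division applied
    return [down]                             # used directly, no division

# E.g. a 7680 x 4320 (8K) chrominance map halved per axis becomes 3840 x 2160
# (4K), which is still above 1080P, so it is further tiled.
```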
With continued reference to fig. 1, after step S130, step S140 is performed. In step S140, the plurality of first luminance map blocks and the downsampled first chromaticity diagram are used as a preprocessing result of the first image, or the plurality of first luminance map blocks and the plurality of downsampled first chromaticity diagram blocks are used as a preprocessing result of the first image, where the preprocessing result of the first image is used for encoding by the first encoding chip. As described above, the luminance information of the first image and the chromaticity information of the first image are extracted from the first image, respectively, and the thus-obtained first luminance map and first chromaticity map reveal the first image from the luminance level and the chromaticity level, respectively. The first luminance graph and the first chromaticity graph can be processed correspondingly, so that the principle that the sensitivity degree of human eyes to chromaticity is lower than that of human eyes to luminance (because the number of cells for identifying luminance in human eyes is generally larger than that of cells for identifying chromaticity) can be utilized, and the image coding of a high-resolution image can be realized on the basis of a coding chip with low-resolution coding capability, so that the coding efficiency and the picture quality can be improved. The first brightness map is divided based on the first coding maximum resolution of the first coding chip aiming at the extraction result of the brightness information of the first brightness map, namely the first image, so that a plurality of first brightness map blocks are obtained, original information of the first brightness map is reserved, the requirement that a user is sensitive to the change of the brightness information is met, and the coding efficiency and the picture quality are improved. And for the first chromaticity diagram, namely the extraction result of chromaticity information of the first image, downsampling the first chromaticity diagram according to a first compression ratio to obtain a downsampled first chromaticity diagram, and then selectively dividing the downsampled first chromaticity diagram based on the first coding maximum resolution to obtain a plurality of downsampled first chromaticity diagram blocks. In this way, the resolution of the first color map is reduced through downsampling, so that the downsampled first color map is obtained, the coding chip with low resolution coding capability is adapted, the coding speed is improved, meanwhile, the influence caused by image distortion is controlled through the specification of the first compression ratio, and the flexible adaptation of the coding capability of the coding chip and the resolution of an original image is realized through selectively dividing the downsampled first color map based on the maximum resolution of the first coding.
In short, the preprocessing method for image encoding shown in fig. 1 avoids the additional cost of hardware upsampling and does not need to simulate or approximate, through high-complexity algorithmic operations, the information lost by resolution reduction; the picture resolution is maintained through block division and encoder multiplexing. The luminance information and chrominance information of the first image are extracted separately, so that the first luminance map and the first chrominance map present the first image at the luminance level and the chrominance level, respectively, and each receives its own processing flow. The original information of the first luminance map is retained, matching the user's higher sensitivity to changes in luminance information and helping to improve encoding efficiency and picture quality. The resolution of the first chrominance map is reduced by downsampling to obtain the downsampled first chrominance map, which helps adapt an encoding chip with low-resolution encoding capability and increases encoding speed, while the limit on the first compression ratio controls the impact of image distortion. Selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution flexibly adapts the encoding capability of the encoding chip to the resolution of the original image, so that image encoding of a high-resolution image is achieved on the basis of an encoding chip adapted to low-resolution encoding, improving encoding efficiency and picture quality.
Referring to fig. 1, in one possible embodiment, when the image resolution of the downsampled first chromaticity diagram is higher than the first encoding maximum resolution, the downsampled first chromaticity diagram is divided to obtain the plurality of downsampled first chromaticity diagram blocks, and the plurality of first luminance diagram blocks and the plurality of downsampled first chromaticity diagram blocks are used as the preprocessing result of the first image, and when the image resolution of the downsampled first chromaticity diagram is not higher than the first encoding maximum resolution, the downsampled first chromaticity diagram is not divided, and the plurality of first luminance diagram blocks and the downsampled first chromaticity diagram are used as the preprocessing result of the first image. In some embodiments, the first compression ratio is not higher than fifty percent, and the image resolution of the downsampled first chromaticity diagram is determined based on the first image resolution and the first compression ratio. In this way, the resolution of the first color map is reduced through downsampling, so that the downsampled first color map is obtained, the coding chip with low resolution coding capability is adapted, the coding speed is improved, meanwhile, the influence caused by image distortion is controlled through the specification of the first compression ratio, and the flexible adaptation of the coding capability of the coding chip and the resolution of an original image is realized through selectively dividing the downsampled first color map based on the maximum resolution of the first coding.
In one possible implementation manner, the preprocessing result of the first image is encoded by the first encoding chip so as to obtain a plurality of code streams, and the plurality of code streams are marked, synchronized and packaged so as to obtain the encoding result of the first image. In this way, the original high-resolution image (including the image frame in the video data) is preprocessed, so that the processing capacity of the encoding chip is adapted, the image encoding efficiency is improved, and the mature encoding chip adapted to the low-resolution image can be used for performing image video encoding operation.
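The marking, synchronization and packaging of the code streams is not specified further in the text. Purely as an illustration of the kind of per-tile metadata such a container could carry, the sketch below defines hypothetical fields (none of them are the patent's format) so that a decoding side could restore tile positions and merge the planes of one frame.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TileStream:
    frame_id: int              # synchronization marker: all tiles of one first image share it
    plane: str                 # "luma" or "chroma"
    grid_pos: Tuple[int, int]  # (row, col) of the tile within its plane
    full_size: Tuple[int, int] # resolution of the plane before division
    payload: bytes             # bitstream produced by the first encoding chip for this tile

def package(streams: List[TileStream]) -> List[TileStream]:
    """Group and order the per-tile bitstreams of one frame so that the
    decoder can place each tile before merging the planes."""
    return sorted(streams, key=lambda s: (s.frame_id, s.plane, s.grid_pos))
```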
In one possible implementation, the first image resolution is 4K resolution or 8K resolution, and the first encoding maximum resolution is 1080P. In this way, the original high-resolution image (including the image frame in the video data) is preprocessed, so that the processing capacity of the encoding chip is adapted, the image encoding efficiency is improved, and the mature encoding chip adapted to the low-resolution image can be used for performing image video encoding operation.
In one possible implementation, processing the first image according to the luminance level and the chrominance level to obtain a first luminance map of the first image and a first chrominance map of the first image, respectively, includes determining a triplet for each pixel in the first image, the triplet including a first element representing luminance and second and third elements representing chrominance, constructing the first luminance map of the first image using the first element representing luminance in the triplet for each pixel in the first image, and constructing the first chrominance map of the first image using the second element representing chrominance and the third element in the triplet for each pixel in the first image. In some embodiments, the first image is in an RGB image format, and the triplet of each pixel in the first image is derived based on a conversion formula from the RGB image format to the YUV image format. Thus, the luminance information of the first image and the chromaticity information of the first image are extracted from the first image, respectively, and the thus obtained first luminance map and first chromaticity map reveal the first image from the luminance level and the chromaticity level, respectively. The first luminance graph and the first chromaticity graph can be processed correspondingly, so that the principle that the sensitivity degree of human eyes to chromaticity is lower than that of human eyes to luminance (because the number of cells for identifying luminance in human eyes is generally larger than that of cells for identifying chromaticity) can be utilized, and the image coding of a high-resolution image can be realized on the basis of a coding chip with low-resolution coding capability, so that the coding efficiency and the picture quality can be improved.
In a possible implementation manner, the transmission chain of the first image is associated with a plurality of transmission nodes, the first encoding chip is disposed on a first transmission node of the plurality of transmission nodes, and the preprocessing method is applied to preprocessing before the first image received by the first transmission node is subjected to image encoding by the first encoding chip on the first transmission node. Here, the plurality of transmission nodes may each employ a different encoding chip, and thus may have different encoding processing capabilities. For example, one transmitting node may support encoding a 4K resolution image and another transmitting node may not support encoding a 4K resolution image. Repeated encoding and decoding of image video data to be transmitted may be involved at these transmission nodes, so that differences in processing power in the respective image encodings of the different transmission nodes, and the impact on the image quality ultimately presented to the user, need to be taken into account. The preprocessing method for image coding shown in fig. 1, in order to flexibly adapt to the coding capability of any coding chip, specifies how to process the first luminance map based on the first coding maximum resolution of the first coding chip and specifies how to process the first chrominance map based on the first coding maximum resolution of the first coding chip, so that the image coding of the high resolution image can be realized on the basis of the coding chip adapting to the low resolution coding capability, and the coding efficiency and the picture quality are improved.
In some embodiments, the plurality of transmission nodes includes an image video recording node, an image video compression node, an image video distribution node, a network transmission node, a terminal device receiving node, an image video decompression node. In some embodiments, the first transmission node is any one of the plurality of transmission nodes, and the first encoding maximum resolution of the first encoding chip represents encoding processing capability of the first transmission node. It should be understood that the plurality of transmission nodes may be of any composition and connection, and may be defined only in a portion of the transmission chain. The preprocessing method for image coding shown in fig. 1 realizes image coding of high-resolution images on the basis of a coding chip adapting to low-resolution coding capability, improves coding efficiency and improves picture quality.
Fig. 2 is a schematic diagram of a preprocessing chip for image encoding according to an embodiment of the present application. As shown in fig. 2, the preprocessing chip comprises: a receiving module 210 for receiving a first image, where the first image has a first image resolution; a first processing module 220 for processing the first image at the luminance level and the chrominance level, thereby obtaining a first luminance map of the first image and a first chrominance map of the first image, respectively; a second processing module 230 for dividing the first luminance map based on a first encoding maximum resolution of a first encoding chip, thereby obtaining a plurality of first luminance map blocks, the resolution of the largest first luminance map block being not higher than the first encoding maximum resolution, for downsampling the first chrominance map according to a first compression ratio, thereby obtaining a downsampled first chrominance map, and then for selectively dividing the downsampled first chrominance map based on the first encoding maximum resolution, thereby obtaining a plurality of downsampled first chrominance map blocks; and an output module 240 for using the plurality of first luminance map blocks and the downsampled first chrominance map, or the plurality of first luminance map blocks and the plurality of downsampled first chrominance map blocks, as the preprocessing result of the first image. The preprocessing result of the first image is used for encoding by the first encoding chip.
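Purely as a software analogue of the module split shown in fig. 2 (the patent describes hardware modules, not Python code), the following sketch chains the hypothetical helpers from the earlier sketches into the receiving, first-processing, second-processing and output stages.

```python
class PreprocessingPipeline:
    """Illustrative software stand-in for the preprocessing chip of fig. 2."""

    def __init__(self, max_w: int = 1920, max_h: int = 1080, ratio: float = 0.5):
        self.max_w, self.max_h, self.ratio = max_w, max_h, ratio

    def run(self, rgb_frame):
        # receiving module 210: accept the first image (its size is the first image resolution)
        frame = rgb_frame
        # first processing module 220: luminance / chrominance separation
        luma, chroma = split_luma_chroma(frame)
        # second processing module 230: tile the luminance map; downsample and
        # selectively tile the chrominance map
        luma_tiles = tile_map(luma, self.max_w, self.max_h)
        chroma_parts = downsample_chroma(chroma, self.ratio, self.max_w, self.max_h)
        # output module 240: hand the preprocessing result to the first encoding chip
        return luma_tiles, chroma_parts
```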
In a word, the preprocessing chip shown in fig. 2 avoids the additional cost of hardware upsampling and does not need to simulate or approximate, through high-complexity algorithmic operations, the information lost by resolution reduction; the picture resolution is maintained through block division and encoder multiplexing. The luminance information and chrominance information of the first image are extracted separately, so that the first luminance map and the first chrominance map present the first image at the luminance level and the chrominance level, respectively, and each receives its own processing flow. The original information of the first luminance map is retained, matching the user's sensitivity to changes in luminance information and helping to improve encoding efficiency and picture quality. The resolution of the first chrominance map is reduced by downsampling to obtain the downsampled first chrominance map, which helps adapt an encoding chip with low-resolution encoding capability and increases encoding speed, while the limit on the first compression ratio controls the impact of image distortion.
Referring to fig. 2, in one possible embodiment, when the image resolution of the downsampled first chromaticity diagram is higher than the first encoding maximum resolution, the second processing module 230 divides the downsampled first chromaticity diagram to obtain the plurality of downsampled first chromaticity diagram blocks, and the output module 240 uses the plurality of first luminance diagram blocks and the plurality of downsampled first chromaticity diagram blocks as the preprocessing result of the first image, and when the image resolution of the downsampled first chromaticity diagram is not higher than the first encoding maximum resolution, the second processing module 230 does not divide the downsampled first chromaticity diagram, and the output module 240 uses the plurality of first luminance diagram blocks and the downsampled first chromaticity diagram as the preprocessing result of the first image. In this way, the resolution of the first color map is reduced through downsampling, so that the downsampled first color map is obtained, the coding chip with low resolution coding capability is adapted, the coding speed is improved, meanwhile, the influence caused by image distortion is controlled through the specification of the first compression ratio, and the flexible adaptation of the coding capability of the coding chip and the resolution of an original image is realized through selectively dividing the downsampled first color map based on the maximum resolution of the first coding.
In one possible implementation manner, the preprocessing result of the first image is encoded by the first encoding chip so as to obtain a plurality of code streams, and the plurality of code streams are marked, synchronized and packaged so as to obtain the encoding result of the first image. In this way, the original high-resolution image (including the image frame in the video data) is preprocessed, so that the processing capacity of the encoding chip is adapted, the image encoding efficiency is improved, and the mature encoding chip adapted to the low-resolution image can be used for performing image video encoding operation.
Fig. 3 is a schematic diagram of a first image transmission chain according to an embodiment of the present application. As shown in fig. 3, the transmission chain of the first image associates a plurality of transmission nodes, the first encoding chip is disposed on a first transmission node of the plurality of transmission nodes, and the preprocessing method is applied to preprocessing before image encoding is performed on the first image received by the first transmission node through the first encoding chip on the first transmission node. The plurality of transmission nodes include an image video recording node 310, an image video compression node 320, an image video distribution node 330, a network transmission node 340, a terminal device receiving node 350, and an image video decompression node 360. Here, the first transmission node is any one of the plurality of transmission nodes, and the first encoding maximum resolution of the first encoding chip represents encoding processing capability of the first transmission node.
Referring to fig. 1,2 and 3, repeated encoding and decoding of image video data to be transmitted may be involved at these transmission nodes, so that differences in processing power in the respective image encodings of the different transmission nodes, and the impact on the image quality ultimately presented to the user, need to be taken into account. The preprocessing method for image coding shown in fig. 1 and the preprocessing chip for image coding shown in fig. 2, in order to flexibly adapt the coding capability of an arbitrary coding chip, specify how to process the first luminance map based on the first coding maximum resolution of the first coding chip and specify how to process the first chrominance map based on the first coding maximum resolution of the first coding chip, thereby realizing image coding for high resolution images on the basis of the coding chip adapting low resolution coding capability, improving coding efficiency and improving picture quality.
Fig. 4 is a schematic diagram of a computing device 400 including one or more processors 410, a communication interface 420, and a memory 430, according to an embodiment of the present application. The processor 410, communication interface 420, and memory 430 are interconnected by a bus 440. Optionally, the computing device 400 may further include an input/output interface 450, where the input/output interface 450 is connected to an input/output device for receiving parameters set by a user, etc. The computing device 400 can be used to implement some or all of the functionality of the device embodiments or system embodiments of the present application described above, and the processor 410 can be used to implement some or all of the operational steps of the method embodiments of the present application described above. For example, specific implementations of the computing device 400 performing various operations may refer to specific details in the above-described embodiments, such as the processor 410 being configured to perform some or all of the steps of the above-described method embodiments or some or all of the operations of the above-described method embodiments. For another example, in an embodiment of the present application, the computing device 400 may be used to implement some or all of the functionality of one or more components of the apparatus embodiments described above, and the communication interface 420 may be used in particular for communication functions and the like necessary to implement the functionality of those apparatuses, components, and the processor 410 may be used in particular for processing functions and the like necessary to implement the functionality of those apparatuses, components.
It should be appreciated that the computing device 400 of fig. 4 may include one or more processors 410, and that the processors 410 may cooperatively provide processing power when connected in parallel, in series, in a series-parallel combination, or in any other arrangement; the processors 410 may also constitute a processor sequence or processor array, be divided into primary and secondary processors, or have different architectures such as a heterogeneous computing architecture. In addition, the structure and functions described for the computing device 400 shown in fig. 4 are exemplary and not limiting. In some example embodiments, the computing device 400 may include more or fewer components than shown in fig. 4, or combine certain components, or split certain components, or have a different arrangement of components.
The processor 410 may have various specific implementations. For example, the processor 410 may include one or more of a central processing unit (CPU), a graphics processing unit (GPU), a neural-network processing unit (NPU), a tensor processing unit (TPU), or a data processing unit (DPU), and the embodiment of the present application is not particularly limited in this respect. The processor 410 may be a single-core processor or a multi-core processor, and may also be a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor 410 may also be implemented solely with logic devices having built-in processing logic, such as an FPGA or a digital signal processor (DSP). The communication interface 420 may be a wired interface, such as an Ethernet interface or a local interconnect network (LIN) interface, or a wireless interface, such as a cellular network interface or a wireless local area network interface, for communicating with other modules or devices.
The memory 430 may be a nonvolatile memory, such as a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The memory 430 may also be a volatile memory, such as a random access memory (RAM), which is used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). The memory 430 may also be used to store program code and data, so that the processor 410 invokes the program code stored in the memory 430 to perform some or all of the operational steps of the method embodiments described above, or to perform the corresponding functions in the apparatus embodiments described above. Moreover, the computing device 400 may contain more or fewer components than shown in fig. 4, or may have a different arrangement of components.
The bus 440 may be a peripheral component interconnect express (PCIe) bus, an extended industry standard architecture (EISA) bus, a unified bus (Ubus or UB), a compute express link (CXL), a cache coherent interconnect for accelerators (CCIX), or the like. The bus 440 may be divided into an address bus, a data bus, a control bus, and the like, and may further include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. For clarity of illustration, the bus is shown with only one bold line in fig. 4, but this does not mean that there is only one bus or only one type of bus.
The method and the apparatus provided by the embodiments of the present application are based on the same inventive concept. Because the principles by which the method and the apparatus solve the problem are similar, the embodiments, implementations or examples of the method and the apparatus may refer to each other, and repeated descriptions are omitted. Embodiments of the present application also provide a system including a plurality of computing devices, each of which may be structured as described above. For the functions or operations that the system can implement, reference may be made to the specific implementation steps in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not repeated here.
Embodiments of the present application also provide a computer-readable storage medium having stored therein computer instructions which, when executed on a computer device (e.g., one or more processors), implement the method steps of the method embodiments described above. For the specific implementation by which the processor executes the above method steps, reference may be made to the specific operations described in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. The application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the application may be implemented, in whole or in part, in software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that contains one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media, or semiconductor media. The semiconductor medium may be a solid state disk, or may be a random access memory, a flash memory, a read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, a register, or any other suitable form of storage medium.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. Each flow and/or block of the flowchart and/or block diagrams, and combinations of flows and/or blocks in the flowchart and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Each of the foregoing embodiments has its own emphasis, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit or scope of the embodiments of the application. The steps in the methods of the embodiments of the present application may be reordered, combined or deleted according to actual needs, and the modules in the systems of the embodiments of the present application may be divided, combined or deleted according to actual needs. The present application is also intended to cover such modifications and variations provided they come within the scope of the claims and their equivalents.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411304554.0A CN118870030B (en) | 2024-09-19 | 2024-09-19 | Preprocessing method for image coding, computer equipment, medium and chip |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118870030A CN118870030A (en) | 2024-10-29 |
CN118870030B (en) | 2024-12-17 |
Family
ID=93160424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411304554.0A Active CN118870030B (en) | 2024-09-19 | 2024-09-19 | Preprocessing method for image coding, computer equipment, medium and chip |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118870030B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116547969A (en) * | 2020-11-19 | 2023-08-04 | Huawei Technologies Co., Ltd. | Processing method of chroma subsampling format in image decoding based on machine learning |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
PL2941872T3 (en) * | 2013-01-02 | 2019-03-29 | Dolby Laboratories Licensing Corporation | Backward-compatible coding for ultra high definition video signals with enhanced dynamic range |
CN106162180A (en) * | 2016-06-30 | 2016-11-23 | Beijing QIYI Century Science & Technology Co., Ltd. | Image encoding/decoding method and device |
US10834400B1 (en) * | 2016-08-19 | 2020-11-10 | Fastvdo Llc | Enhancements of the AV1 video codec |
CN117693937A (en) * | 2021-07-01 | 2024-03-12 | Douyin Vision Co., Ltd. | Utilizing codec information during super resolution procedure |
CN119032575A (en) * | 2022-04-15 | 2024-11-26 | Hyundai Motor Company | Video coding method and apparatus using improved in-loop filter for chroma components |
Also Published As
Publication number | Publication date |
---|---|
CN118870030A (en) | 2024-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11373275B2 (en) | Method for generating high-resolution picture, computer device, and storage medium | |
US11012489B2 (en) | Picture file processing method, picture file processing device, and storage medium | |
WO2020057182A1 (en) | Image compression method and apparatus | |
WO2019105179A1 (en) | Intra-frame prediction method and device for color component | |
JP6703032B2 (en) | Backward compatibility extended image format | |
EP4336829A1 (en) | Feature data encoding method and apparatus and feature data decoding method and apparatus | |
WO2023010754A1 (en) | Image processing method and apparatus, terminal device, and storage medium | |
CN113487524B (en) | Image format conversion method, apparatus, device, storage medium, and program product | |
CN109413434B (en) | Image processing method, device, system, storage medium and computer equipment | |
WO2020224551A1 (en) | Information compression/decompression methods and apparatuses, and storage medium | |
US20240223790A1 (en) | Encoding and Decoding Method, and Apparatus | |
CN114040246A (en) | Image format conversion method, device, equipment and storage medium of graphic processor | |
WO2023010750A1 (en) | Image color mapping method and apparatus, electronic device, and storage medium | |
CN101919248A (en) | Byte representation for enhanced image compression | |
CN116508320A (en) | Chroma subsampling format processing method in image decoding based on machine learning | |
CN113365016A (en) | Real-time map image data acquisition system and method | |
CN118870030B (en) | Preprocessing method for image coding, computer equipment, medium and chip | |
CN114170082A (en) | Video playing method, image processing method, model training method, device and electronic equipment | |
CN115278301B (en) | Video processing method, system and equipment | |
US12095981B2 (en) | Visual lossless image/video fixed-rate compression | |
CN111861877A (en) | Method and apparatus for video superdivision variability | |
CN112967194B (en) | Target image generation method and device, computer readable medium and electronic equipment | |
US11170260B2 (en) | Techniques for determining importance of encoded image components for artificial intelligence tasks | |
CN108933945B (en) | GIF picture compression method, device and storage medium | |
CN117808857B (en) | A self-supervised 360° depth estimation method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |