
CN115118979B - Image encoding method, image decoding method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115118979B
CN115118979B
Authority
CN
China
Prior art keywords
block
image
image block
joint coding
coding mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210635198.5A
Other languages
Chinese (zh)
Other versions
CN115118979A (en)
Inventor
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210635198.5A priority Critical patent/CN115118979B/en
Publication of CN115118979A publication Critical patent/CN115118979A/en
Application granted granted Critical
Publication of CN115118979B publication Critical patent/CN115118979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20021: Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an image encoding method, an image decoding method, a device, equipment, and a storage medium, belonging to the technical field of computer vision processing. The method comprises the following steps: acquiring a first chroma residual block and a second chroma residual block of any image block of a target image; determining a target joint coding mode based on chroma residual information of reference pixel points corresponding to the image block; jointly coding the first chroma residual block and the second chroma residual block of the image block based on the target joint coding mode to obtain a joint chroma residual block; and determining the coding result of the image block based on the joint chroma residual block. Because the reference pixel points precede the image block in decoding order, the target joint coding mode can be determined from their chroma residual information when the coding result of the image block is decoded. Identification information that would otherwise identify the target joint coding mode is thus saved, reducing the data volume of the coding result of the image block.

Description

Image encoding method, image decoding method, device, equipment, and storage medium
Technical Field
The embodiments of the application relate to the technical field of computer vision processing, and in particular to an image encoding method, an image decoding method, a device, equipment, and a storage medium.
Background
In the field of computer vision processing technology, images are a common type of data. By encoding the image, the data volume of the image can be compressed, the storage space required for storing the image is reduced, and the channel resources occupied by transmitting the image are reduced.
In the related art, a target image may be divided into a plurality of image blocks, and each image block is encoded to obtain its encoding result, thereby encoding the target image. For any image block, a first chroma residual block and a second chroma residual block of the block are acquired first, and a target joint coding mode is determined based on the two residual blocks. The first chroma residual block and the second chroma residual block are then jointly coded based on the target joint coding mode to obtain a joint chroma residual block. The encoding result of the image block is then determined based on identification information and the joint chroma residual block, where the identification information identifies the target joint coding mode adopted by the image block.
In the above technique, the coding result of an image block includes both the identification information and the joint chroma residual block, so the coding result has a larger data volume, which enlarges the encoded image and degrades coding performance.
Disclosure of Invention
The application provides an image encoding method, an image decoding method, a device, equipment, and a storage medium, which can solve the problem in the related art that the coding result of an image block has a large data volume.
In one aspect, there is provided an image encoding method, the method comprising:
For any one image block of a plurality of image blocks included in a target image, acquiring a first chroma residual block and a second chroma residual block of the image block;
Determining a target joint coding mode based on chroma residual information of reference pixel points corresponding to the image block, wherein the reference pixel points are pixel points around the image block whose decoding order precedes that of the image block;
Performing joint coding on the first chroma residual block and the second chroma residual block of the image block based on the target joint coding mode to obtain a joint chroma residual block;
And determining a coding result of the image block based on the joint chroma residual block.
In another aspect, there is provided an image decoding method, the method including:
For any one image block of a plurality of image blocks included in a target image, acquiring a coding result of the image block, wherein the coding result includes a joint chroma residual block;
Determining a target joint coding mode based on chroma residual information of reference pixel points corresponding to the image block, wherein the reference pixel points are pixel points around the image block whose decoding order precedes that of the image block;
Decoding the joint chroma residual block based on the target joint coding mode to obtain a first chroma residual block and a second chroma residual block of the image block;
And determining a decoding result of the image block based on the first chroma residual block and the second chroma residual block of the image block.
In another aspect, there is provided an image encoding apparatus, the apparatus including:
An acquisition module, configured to acquire, for any one image block of a plurality of image blocks included in a target image, a first chroma residual block and a second chroma residual block of the image block;
A determining module, configured to determine a target joint coding mode based on chroma residual information of reference pixel points corresponding to the image block, wherein the reference pixel points are pixel points around the image block whose decoding order precedes that of the image block;
A coding module, configured to jointly code the first chroma residual block and the second chroma residual block of the image block based on the target joint coding mode to obtain a joint chroma residual block;
wherein the determining module is further configured to determine a coding result of the image block based on the joint chroma residual block.
In one possible implementation manner, the determining module is configured to determine distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the image block, where the distortion value of a joint coding mode characterizes the image quality after the image block is coded in that mode; and to determine a target joint coding mode from the plurality of joint coding modes based on their distortion values.
In one possible implementation, the chroma residual information includes an original first chroma residual and an original second chroma residual; the determining module is configured, for any one joint coding mode, to determine the predicted second chroma residual of each reference pixel point under that mode from the pixel's original first chroma residual, and to determine the distortion value of that mode based on the original second chroma residuals and the predicted second chroma residuals of the reference pixel points.
In one possible implementation manner, there are a plurality of reference pixel points; the determining module is configured, for any reference pixel point, to compute the difference between the pixel's original second chroma residual and its predicted second chroma residual under a joint coding mode, obtaining the pixel's distortion value for that mode; the distortion value of the mode is then determined based on the distortion values of the plurality of reference pixel points for that mode.
In a possible implementation manner, the determining module is configured to determine, based on the distortion values of the plurality of joint coding modes, the joint coding mode corresponding to the minimum distortion value; to determine the rate-distortion cost of that joint coding mode, where the rate-distortion cost characterizes the coding quality when coding is performed in that mode; and to determine that joint coding mode as the target joint coding mode when its rate-distortion cost is not greater than the rate-distortion cost of the non-joint coding mode.
In a possible implementation manner, the determining module is further configured to determine distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the image block; to determine, from the plurality of joint coding modes, the joint coding mode corresponding to the minimum distortion value; and, when the rate-distortion cost of that joint coding mode is greater than the rate-distortion cost of the non-joint coding mode, to determine the coding result of the image block based on the first chroma residual block and the second chroma residual block of the image block.
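The decision described in the two implementations above (adopt the minimum-distortion joint mode only when its rate-distortion cost does not exceed the non-joint cost, otherwise code the two chroma residual blocks separately) can be sketched as follows. The function names are illustrative assumptions, and the rate-distortion cost computation itself is abstracted into a caller-supplied function rather than implemented:

```python
def decide_coding_mode(mode_distortions, rd_cost_of_mode, non_joint_rd_cost):
    """mode_distortions maps each joint coding mode to its distortion over
    the reference pixels. Pick the minimum-distortion joint mode, then keep
    it only if its rate-distortion cost does not exceed the non-joint cost;
    otherwise fall back to non-joint coding (returned as None), in which the
    first and second chroma residual blocks are coded separately."""
    best_mode = min(mode_distortions, key=mode_distortions.get)
    if rd_cost_of_mode(best_mode) <= non_joint_rd_cost:
        return best_mode  # joint coding with the target joint coding mode
    return None           # non-joint coding

# Illustrative values: mode 2 has the least distortion, and its RD cost
# (10.0) does not exceed the non-joint cost (12.0), so it is chosen.
distortions = {1: 18, 2: 3, 3: 24}
print(decide_coding_mode(distortions, lambda m: 10.0, 12.0))  # prints 2
```

With a joint RD cost above the non-joint cost, the same call returns None and the encoder falls back to coding both chroma residual blocks.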
In a possible implementation manner, the determining module is configured to use the joint chroma residual block and target identification information as the coding result of the image block, where the target identification information characterizes whether the image block adopts joint coding.
In another aspect, there is provided an image decoding apparatus, the apparatus including:
An obtaining module, configured to obtain, for any one image block of a plurality of image blocks included in a target image, a coding result of the image block, where the coding result includes a joint chroma residual block;
A determining module, configured to determine a target joint coding mode based on chroma residual information of reference pixel points corresponding to the image block, where the reference pixel points are pixel points around the image block whose decoding order precedes that of the image block;
A decoding module, configured to decode the joint chroma residual block based on the target joint coding mode to obtain a first chroma residual block and a second chroma residual block of the image block;
wherein the determining module is further configured to determine a decoding result of the image block based on the first chroma residual block and the second chroma residual block of the image block.
In one possible implementation manner, the determining module is configured to determine distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the image block, where the distortion value of a joint coding mode characterizes the image quality after the image block is coded in that mode; and to determine a target joint coding mode from the plurality of joint coding modes based on their distortion values.
In a possible implementation manner, the coding result of the image block further includes target identification information; if the target identification information characterizes that the image block adopts joint coding, the coding result of the image block includes a joint chroma residual block.
In a possible implementation manner, the coding result of the image block further includes target identification information; if the target identification information characterizes that the image block does not adopt joint coding, the coding result of the image block includes a first chroma residual block and a second chroma residual block of the image block;
and the determining module is further configured to determine a decoding result of the image block based on the first chroma residual block and the second chroma residual block of the image block.
In another aspect, there is provided an electronic device including a processor and a memory, where at least one computer program is stored in the memory, where the at least one computer program is loaded and executed by the processor, so that the electronic device implements any one of the above-mentioned image encoding methods or implements any one of the above-mentioned image decoding methods.
In another aspect, there is provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to cause an electronic device to implement any one of the above-described image encoding methods or any one of the above-described image decoding methods.
In another aspect, there is provided a computer program or a computer program product, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to cause an electronic device to implement any one of the above-mentioned image encoding methods or to implement any one of the above-mentioned image decoding methods.
The technical scheme provided by the application has at least the following beneficial effects:
The technical scheme provided by the application determines the target joint coding mode of any image block based on the chroma residual information of the reference pixel points corresponding to that block. Since the reference pixel points are pixel points around the block whose decoding order precedes it, their chroma residual information is available when the coding result of the block is decoded, and the target joint coding mode can be determined from it at the decoder. Identification information for identifying the target joint coding mode is therefore saved, which reduces the data volume of the coding result of the image block and improves coding performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of an image encoding method or an image decoding method according to an embodiment of the present application;
Fig. 2 is a flowchart of an image encoding method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a reference pixel according to an embodiment of the present application;
Fig. 4 is a flowchart of an image decoding method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an image encoding method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an image decoding method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an image encoding apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image decoding apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of an image encoding method or an image decoding method according to an embodiment of the present application, and as shown in fig. 1, the implementation environment includes a terminal device 101 and a server 102. The image encoding method or the image decoding method in the embodiment of the present application may be performed by the terminal device 101, by the server 102, or by both the terminal device 101 and the server 102.
The terminal device 101 may be a smart phone, a game console, a desktop computer, a tablet computer, a laptop computer, a smart television, a smart car device, a smart voice interaction device, a smart home appliance, etc. The server 102 may be a server, or a server cluster formed by a plurality of servers, or any one of a cloud computing platform and a virtualization center, which is not limited in this embodiment of the present application. The server 102 may be in communication connection with the terminal device 101 via a wired network or a wireless network. The server 102 may have functions of data processing, data storage, data transceiving, etc., which are not limited in the embodiment of the present application. The number of terminal devices 101 and servers 102 is not limited, and may be one or more.
The technical scheme provided by the embodiments of the application can be implemented based on cloud technology. Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like, based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing will become an important support for the background services of technical network systems, which require large amounts of computing and storage resources, such as video websites, image websites, and other portals. With the rapid development and application of the internet industry, each item may in the future carry its own identification mark, which needs to be transmitted to a backend system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong backend support, which can only be realized through cloud computing.
Based on the above implementation environment, an embodiment of the present application provides an image encoding method; Fig. 2 shows its flowchart. The method may be executed by the terminal device 101 or the server 102 in Fig. 1, or by both. For convenience of description, the terminal device 101 or server 102 that performs the image encoding method is referred to below as an electronic device. As shown in Fig. 2, the method includes steps S201 to S204.
Step S201: for any one image block of a plurality of image blocks included in a target image, acquire a first chroma residual block and a second chroma residual block of the image block.
The embodiment of the application does not limit how the target image is acquired or what it contains. Illustratively, an arbitrary image photographed by a user, an arbitrary image captured from a network, or an arbitrary frame extracted from a video may serve as the target image.
The target image may be divided into a plurality of image blocks. The embodiment of the application does not limit the division manner; for example, the target image is divided into a predetermined number of image blocks, or into image blocks of a predetermined size by sliding a window of that size. The size of an image block is likewise not limited; illustratively, each image block is a pixel block of at least 4×4 and at most 32×32.
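The sliding-window division described above can be sketched as follows. The block size and the function name are illustrative assumptions, not part of the patent text; the sketch scans the image left to right, top to bottom, matching the decoding order used later in the description:

```python
def divide_into_blocks(height, width, block_size=8):
    """Tile an image of the given dimensions into block_size x block_size
    coordinate windows (top, left, bottom, right), scanned left to right,
    top to bottom. Edge windows are clipped to the image bounds."""
    blocks = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            blocks.append((top, left,
                           min(top + block_size, height),
                           min(left + block_size, width)))
    return blocks

# A 16x16 image tiles into four 8x8 blocks.
print(len(divide_into_blocks(16, 16, 8)))  # prints 4
```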
Any image block includes the pixel information of a plurality of pixel points, and the pixel information of any pixel point includes the original luminance value of the pixel point (which may be denoted Y), the original first chroma value (which may be denoted Cb), and the original second chroma value (which may be denoted Cr). An image composed of the Y, Cb, and Cr of each pixel is an image in the YUV color format.
It should be noted that the original first chroma values of the pixel points in an image block form a chroma block, which may be referred to as the original first chroma block. Likewise, the original second chroma values of the pixel points form another chroma block, which may be referred to as the original second chroma block.
The predicted first chroma block and the predicted second chroma block of any image block may be determined by any intra-prediction method. When the target image is a frame extracted from a video, they may instead be determined by any inter-prediction method. The predicted first chroma block of an image block includes the predicted first chroma value of each pixel point in the block, and the predicted second chroma block includes the predicted second chroma value of each pixel point.
Subtracting the predicted first chroma block of an image block from its original first chroma block yields the first chroma residual block of the block: the predicted first chroma value of each pixel point is subtracted from the pixel's original first chroma value to obtain the pixel's first chroma residual. A first chroma residual computed this way may be referred to as the pixel's original first chroma residual, and the first chroma residual block of the image block includes the original first chroma residual of each pixel point in the block.
Similarly, subtracting the predicted second chroma block of the image block from its original second chroma block yields the second chroma residual block: the predicted second chroma value of each pixel point is subtracted from the pixel's original second chroma value to obtain the pixel's original second chroma residual. The second chroma residual block of the image block includes the original second chroma residual of each pixel point in the block.
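The per-pixel residual computation described above (original chroma block minus predicted chroma block, applied once for Cb and once for Cr) can be sketched as follows; the function name is an illustrative assumption:

```python
def chroma_residual_block(original, predicted):
    """Element-wise difference of an original chroma block and its
    predicted chroma block. Both arguments are 2-D lists of chroma
    values of the same shape; the result is the chroma residual block."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

# Tiny 2x2 example with made-up Cb values.
original_cb = [[120, 118], [121, 119]]
predicted_cb = [[118, 118], [120, 120]]
print(chroma_residual_block(original_cb, predicted_cb))  # [[2, 0], [1, -1]]
```

The same function applied to the original and predicted second chroma blocks yields the second chroma residual block.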
Step S202: determine the target joint coding mode based on the chroma residual information of the reference pixel points corresponding to the image block, where the reference pixel points are pixel points around the image block whose decoding order precedes that of the image block.
In an image, the pixel information of two adjacent pixel points is similar, so their chroma residual information is also similar. The chroma residual information of one pixel point can therefore be approximated by the chroma residual information of an adjacent pixel point.
For any image block, the related art determines the target joint coding mode from the block's own first and second chroma residual blocks. The embodiment of the application instead relies on the above principle and determines the target joint coding mode from the chroma residual information of neighbouring pixel points.
In addition, the target joint coding mode is determined using the chroma residual information of pixel points that precede the image block in decoding order. When the coding result of the block is decoded, that chroma residual information is already available, so the decoder can reproduce the same mode decision that was made at encoding time.
In summary, the reference pixel points corresponding to any image block are the pixel points around the block whose decoding order precedes it. Referring to Fig. 3, a schematic diagram of reference pixel points according to an embodiment of the application, the image block shown is an 8×8 pixel block. In the embodiment of the application, decoding proceeds from left to right and from top to bottom, so both the pixel points to the left of the block and those above it are decoded before the block. The 8 pixel points immediately to the left of the block and the 8 pixel points immediately above it may therefore be used as the reference pixel points of the block.
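Under the left-to-right, top-to-bottom decoding order described above, the reference pixels of a block are the already-decoded pixels in the column to its left and the row above it. A sketch, with coordinate conventions (row, column, top-left origin) assumed for illustration:

```python
def reference_pixel_coords(top, left, block_size=8):
    """Coordinates of the reference pixels of a block whose top-left corner
    is (top, left): the column immediately left of the block and the row
    immediately above it, both of which precede the block in a
    left-to-right, top-to-bottom decoding order."""
    refs = []
    if left > 0:  # left neighbours exist only when the block is not at the image's left edge
        refs += [(top + i, left - 1) for i in range(block_size)]
    if top > 0:   # top neighbours exist only when the block is not at the top edge
        refs += [(top - 1, left + j) for j in range(block_size)]
    return refs

# An interior 8x8 block has 8 left + 8 top reference pixels, as in Fig. 3.
print(len(reference_pixel_coords(8, 8)))  # prints 16
```

A block at the top-left corner of the image has no previously decoded neighbours and therefore no reference pixels under this convention.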
After determining the reference pixel point corresponding to any image block, the chroma residual information of the reference pixel point can be obtained, and the target joint coding mode is determined based on the chroma residual information of the reference pixel point.
In one possible implementation, determining the target joint coding mode based on the chroma residual information of the reference pixel point corresponding to any one image block includes: determining distortion values of a plurality of joint coding modes based on chromaticity residual information of reference pixel points corresponding to any image block, wherein the distortion values of the joint coding modes are used for representing image quality after any image block is coded based on the joint coding modes; a target joint coding mode is determined from the plurality of joint coding modes based on distortion values of the plurality of joint coding modes.
The embodiment of the application does not limit the number of joint coding modes. Illustratively, the number of joint coding modes is three, denoted joint coding mode 1, joint coding mode 2, and joint coding mode 3. The distortion value of each joint coding mode may be determined based on the chroma residual information of the reference pixels corresponding to any image block. The larger the distortion value of a joint coding mode, the worse the image quality after the image block is coded in that mode; the distortion value is therefore inversely related to the resulting image quality. By determining the target joint coding mode from the distortion values of the plurality of joint coding modes, the mode corresponding to higher image quality can be selected, improving the quality of the coded image.
Optionally, the chroma residual information includes an original first chroma residual and an original second chroma residual. Determining the distortion values of the plurality of joint coding modes based on the chroma residual information of the reference pixels corresponding to any image block includes: for any joint coding mode, determining the predicted second chroma residual of each reference pixel corresponding to that mode from the original first chroma residual of the reference pixel; and determining the distortion value of that mode based on the original second chroma residuals of the reference pixels and their predicted second chroma residuals corresponding to that mode.
In the embodiment of the application, the chroma residual information of a reference pixel comprises its original first chroma residual and its original second chroma residual. The predicted first chroma value and the predicted second chroma value of the reference pixel may be determined by inter prediction or intra prediction in any manner. Subtracting the predicted first chroma value of the reference pixel from its original first chroma value yields the original first chroma residual of the reference pixel; subtracting the predicted second chroma value from the original second chroma value yields the original second chroma residual.
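The residual computation described above amounts to an element-wise subtraction of the predicted chroma values from the original ones; a minimal sketch (the helper name is hypothetical) is:

```python
def chroma_residual(original, predicted):
    """Original chroma residual block: original chroma value minus the
    value predicted by intra or inter prediction, element-wise."""
    return [[o - p for o, p in zip(row_o, row_p)]
            for row_o, row_p in zip(original, predicted)]
```

The same helper produces both the original first chroma residual (from the first chroma component) and the original second chroma residual (from the second).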
Alternatively, the encoding order and decoding order of the image blocks in the target image are the same. Since a reference pixel precedes any image block in decoding order, the image block containing the reference pixel precedes that image block in both encoding and decoding order. The first chroma residual block and the second chroma residual block of the image block containing the reference pixel can therefore be obtained. Because the first chroma residual block contains the original first chroma residual of every pixel in that image block, the original first chroma residual of the reference pixel can be extracted from it. Similarly, the original second chroma residual of the reference pixel can be extracted from the second chroma residual block of the image block containing the reference pixel.
Through statistical analysis of the target image, a sign relationship can be obtained, which characterizes whether the signs of the original first chroma residual and the original second chroma residual are the same. For example, if both residuals are positive, or both are negative, the sign relationship is +1; if one is positive and the other is negative, the sign relationship is -1.
The predicted second chroma residual of the reference pixel corresponding to any joint coding mode may be determined based on the original first chroma residual of the reference pixel and the sign relationship. The original first chroma residual of the reference pixel can be multiplied by the sign relationship, and the product multiplied by the weight corresponding to the joint coding mode, to obtain the predicted second chroma residual of the reference pixel for that mode. That is, Cr′ = CSign × Cb × a is calculated, where Cr′ characterizes the predicted second chroma residual of the reference pixel corresponding to the joint coding mode, CSign characterizes the sign relationship, Cb characterizes the original first chroma residual of the reference pixel, and a characterizes the weight corresponding to the joint coding mode.
In the embodiment of the application, the number of the joint coding modes is multiple, and the weights corresponding to different joint coding modes are different. The number of joint coding modes is three as an example, and further explanation will be made below.
For joint coding mode 1, the corresponding weight is 0.5. The original first chroma residual of the reference pixel is multiplied by the sign relationship, and the product is multiplied by 0.5 to obtain the predicted second chroma residual of the reference pixel corresponding to joint coding mode 1. That is, Cr′ = (CSign × Cb) / 2 is calculated, where Cr′ characterizes the predicted second chroma residual of the reference pixel corresponding to joint coding mode 1, CSign characterizes the sign relationship, and Cb characterizes the original first chroma residual of the reference pixel.
For joint coding mode 2, the corresponding weight is 1. The original first chroma residual of the reference pixel is multiplied by the sign relationship, and the product is taken directly as the predicted second chroma residual of the reference pixel corresponding to joint coding mode 2. That is, Cr′ = CSign × Cb is calculated, with the symbols defined as above.
For joint coding mode 3, the corresponding weight is 2. The original first chroma residual of the reference pixel is multiplied by the sign relationship, and the product is multiplied by 2 to obtain the predicted second chroma residual of the reference pixel corresponding to joint coding mode 3. That is, Cr′ = 2 × (CSign × Cb) is calculated, with the symbols defined as above.
It should be noted that the number of reference pixels is at least one; for each reference pixel, the predicted second chroma residual corresponding to any joint coding mode may be determined according to Cr′ = CSign × Cb × a, so as to obtain the predicted second chroma residual of every reference pixel for that mode.
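The per-mode prediction Cr′ = CSign × Cb × a can be sketched as follows, assuming the three weights 0.5, 1, and 2 described above (the function and dictionary names are illustrative):

```python
# Weight a for each of the three joint coding modes described above.
MODE_WEIGHT = {1: 0.5, 2: 1.0, 3: 2.0}

def predict_cr(cb_residual, csign, mode):
    """Predicted second chroma residual Cr' = CSign * Cb * a for one
    reference pixel, where a is the weight of the chosen joint coding
    mode and CSign is +1 or -1."""
    return csign * cb_residual * MODE_WEIGHT[mode]
```

Applying `predict_cr` to every reference pixel yields the predicted second chroma residuals for one mode, which are then compared against the original second chroma residuals.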
Optionally, the number of reference pixels is plural. Determining the distortion value of any joint coding mode based on the original second chroma residuals of the reference pixels and their predicted second chroma residuals for that mode includes: for each reference pixel, calculating the difference between its original second chroma residual and its predicted second chroma residual for the mode to obtain the distortion value of that reference pixel for the mode; and determining the distortion value of the mode based on the distortion values of the plurality of reference pixels for the mode.
For any reference pixel, the predicted second chroma residual of the reference pixel corresponding to any joint coding mode may be subtracted from its original second chroma residual to obtain the difference between the two (hereinafter simply referred to as the difference of the reference pixel for that mode).
Optionally, the absolute value of the difference of each reference pixel for any joint coding mode is taken as the distortion value of that reference pixel for the mode, and the distortion values of the plurality of reference pixels are added to obtain the distortion value of the mode.
Optionally, the difference of each reference pixel for any joint coding mode is instead squared to obtain the distortion value of that reference pixel for the mode, and the distortion values of the plurality of reference pixels are added to obtain the distortion value of the mode.
By the method, the distortion value corresponding to each joint coding mode can be determined. Then, a target joint coding mode is determined from the plurality of joint coding modes based on distortion values of the plurality of joint coding modes.
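Combining the prediction formula with either distortion variant above, selecting the mode with the minimum distortion over the reference pixels can be sketched as (names are assumptions):

```python
def mode_distortion(ref_cb, ref_cr, csign, mode, squared=False):
    """Distortion of one joint coding mode over all reference pixels:
    the sum of absolute (or squared) differences between each original
    second chroma residual and its prediction CSign * Cb * a."""
    a = {1: 0.5, 2: 1.0, 3: 2.0}[mode]
    total = 0.0
    for cb, cr in zip(ref_cb, ref_cr):
        diff = cr - csign * cb * a
        total += diff * diff if squared else abs(diff)
    return total

def best_mode(ref_cb, ref_cr, csign):
    """Joint coding mode with the minimum distortion value."""
    return min((1, 2, 3),
               key=lambda m: mode_distortion(ref_cb, ref_cr, csign, m))
```

For example, if the second chroma residuals of the reference pixels are consistently about half of the first, mode 1 attains the lowest distortion and is selected.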
Optionally, determining the target joint coding mode from the plurality of joint coding modes based on their distortion values comprises: determining, from the plurality of joint coding modes, the joint coding mode corresponding to the minimum distortion value; determining the rate-distortion cost of that mode, which characterizes the coding quality when coding is performed with it; and, when the rate-distortion cost of the mode corresponding to the minimum distortion value is not greater than that of the non-joint coding mode, determining that mode as the target joint coding mode.
In the embodiment of the application, the minimum distortion value is determined from the distortion values of a plurality of joint coding modes, and the joint coding mode corresponding to the minimum distortion value is determined.
The first chroma residual block and the second chroma residual block of any image block can be jointly coded based on the joint coding mode corresponding to the minimum distortion value to obtain a joint chroma residual block. The joint chroma residual block is transformed (e.g., by DCT or sine transform) to obtain a transformed joint chroma residual block, which is then quantized; the quantized joint chroma residual block is dequantized to obtain a dequantized joint chroma residual block. The dequantized joint chroma residual block is decoded based on the joint coding mode corresponding to the minimum distortion value to obtain a first chroma reconstruction residual block and a second chroma reconstruction residual block. The sum of squared differences of corresponding samples is then calculated from the first and second chroma residual blocks of the image block and the first and second chroma reconstruction residual blocks, yielding a first coding distortion value. The first coding distortion value characterizes the loss of image-block quality caused by coding the image block with the joint coding mode corresponding to the minimum distortion value, and is proportional to that loss.
Entropy coding can be performed on the quantized joint chroma residual block to convert it into binary data, yielding the binary data of the joint coding mode corresponding to the minimum distortion value. The amount of information occupied by this binary data is determined, and the first information amount is determined based on it. The first information amount characterizes the amount of information required for coding with the joint coding mode corresponding to the minimum distortion value.
The first coding distortion value and the first information amount are combined in a weighted sum to obtain the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value. Optionally, J0 = D0 + λ × R0 is calculated, where J0 characterizes the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value, D0 characterizes the first coding distortion value, λ is a weight coefficient related to the quantization parameter (Quantization Parameter, QP), and R0 characterizes the first information amount.
It should be noted that, the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is obtained based on the first coding distortion value and the first information amount, and is a result obtained by balancing the loss of the image block quality and the required information amount, so the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value can be used to characterize the coding quality when coding is performed based on the joint coding mode corresponding to the minimum distortion value.
For the non-joint coding mode, on the one hand, the first chroma residual block of any image block is transformed to obtain a transformed first chroma residual block, which is then quantized; the quantized first chroma residual block is dequantized to obtain a dequantized first chroma residual block. On the other hand, the second chroma residual block of the image block is transformed, quantized, and dequantized in the same way to obtain a dequantized second chroma residual block.
The sum of squared differences of corresponding samples is calculated from the first and second chroma residual blocks of the image block and the dequantized first and second chroma residual blocks, yielding a second coding distortion value. The second coding distortion value characterizes the loss of image-block quality caused by coding the image block with the non-joint coding mode, and is proportional to that loss.
Entropy coding can be performed on the quantized first chroma residual block to convert it into binary data, yielding binary data corresponding to the first chroma residual block; the quantized second chroma residual block is entropy coded likewise, yielding binary data corresponding to the second chroma residual block. The amounts of information occupied by the two sets of binary data are determined, and the second information amount is determined based on them. The second information amount characterizes the amount of information required for coding with the non-joint coding mode.
The second coding distortion value and the second information amount are combined in a weighted sum to obtain the rate-distortion cost of the non-joint coding mode. Optionally, J1 = D1 + λ × R1 is calculated, where J1 characterizes the rate-distortion cost of the non-joint coding mode, D1 characterizes the second coding distortion value, λ is a weight coefficient related to the quantization parameter (Quantization Parameter, QP), and R1 characterizes the second information amount.
The rate-distortion cost of the non-joint coding mode is obtained based on the second coding distortion value and the second information amount and is a result obtained by balancing the loss of the image block quality and the required information amount, and therefore, the rate-distortion cost of the non-joint coding mode can be used for representing the coding quality when coding based on the non-joint coding mode.
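The two costs J0 = D0 + λ × R0 and J1 = D1 + λ × R1 and their comparison can be sketched as follows (hypothetical names; λ would in practice be derived from the quantization parameter):

```python
def rd_cost(distortion, bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def use_joint_coding(d_joint, r_joint, d_sep, r_sep, lam):
    """The joint coding mode is chosen when its rate-distortion cost is
    not greater than that of the non-joint (separate Cb/Cr) mode."""
    return rd_cost(d_joint, r_joint, lam) <= rd_cost(d_sep, r_sep, lam)
```

Joint coding typically trades a small increase in distortion for a large saving in rate (one residual block instead of two), so the comparison often favors it.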
The embodiment of the present application does not limit the manner of transform processing or the form of the information amount. Illustratively, the transform processing is discrete cosine transform (Discrete Cosine Transform, DCT) processing, sine transform processing, or the like. The information amount may be a number of bits.
When the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is not greater than that of the non-joint coding mode, coding with that joint coding mode achieves coding quality at least as good as coding with the non-joint coding mode. The embodiment of the application therefore determines the joint coding mode corresponding to the minimum distortion value as the target joint coding mode, which reduces the data amount of the coding result of the image block while maintaining or improving the coding quality.
Step S203, based on the target joint coding mode, the first chroma residual block and the second chroma residual block of any image block are subjected to joint coding to obtain a joint chroma residual block.
In the embodiment of the application, the weights of the first chroma residual block and the second chroma residual block corresponding to the target joint coding mode can be determined. The joint chroma residual block is then calculated based on the sign relationship, the first chroma residual block of the image block with its corresponding weight, and the second chroma residual block of the image block with its corresponding weight. The relevant content of the sign relationship has been introduced above and is not repeated here.
The target joint coding mode is any one of the plurality of joint coding modes, and the weights of the first and second chroma residual blocks differ between modes. The following takes the target joint coding mode being any one of joint coding modes 1 to 3 as an example. In the formulas below, resCb[x][y] characterizes the first chroma residual block of the image block, resCr[x][y] characterizes its second chroma residual block, resJointC[x][y] characterizes its joint chroma residual block, and CSign characterizes the sign relationship.
When the target joint coding mode is joint coding mode 1 (denoted mode = 1), the mode satisfies: resCb[x][y] = resJointC[x][y], resCr[x][y] = (CSign × resJointC[x][y]) / 2. The weight of the first chroma residual block corresponding to joint coding mode 1 may be determined to be 4/5, and the weight of the second chroma residual block to be 2/5. Based on these weights, CSign, resCb[x][y] and resCr[x][y], resJointC[x][y] = (4 × resCb[x][y] + 2 × CSign × resCr[x][y]) / 5 is calculated.
When the target joint coding mode is joint coding mode 2 (denoted mode = 2), the mode satisfies: resCb[x][y] = resJointC[x][y], resCr[x][y] = CSign × resJointC[x][y]. The weights of the first and second chroma residual blocks corresponding to joint coding mode 2 may both be determined to be 1/2. Based on these weights, CSign, resCb[x][y] and resCr[x][y], resJointC[x][y] = (resCb[x][y] + CSign × resCr[x][y]) / 2 is calculated.
When the target joint coding mode is joint coding mode 3 (denoted mode = 3), the mode satisfies: resCr[x][y] = resJointC[x][y], resCb[x][y] = (CSign × resJointC[x][y]) / 2. The weight of the first chroma residual block corresponding to joint coding mode 3 may be determined to be 2/5, and the weight of the second chroma residual block to be 4/5. Based on these weights, CSign, resCb[x][y] and resCr[x][y], resJointC[x][y] = (4 × resCr[x][y] + 2 × CSign × resCb[x][y]) / 5 is calculated.
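The three forward formulas for resJointC and their per-mode inversions can be sketched per sample as follows (floating-point arithmetic is used for clarity; a real codec would use integer shifts):

```python
def joint_residual(res_cb, res_cr, csign, mode):
    """Derive one joint chroma residual sample from a Cb and a Cr
    residual sample according to the formulas of modes 1-3."""
    if mode == 1:
        return (4 * res_cb + 2 * csign * res_cr) / 5
    if mode == 2:
        return (res_cb + csign * res_cr) / 2
    if mode == 3:
        return (4 * res_cr + 2 * csign * res_cb) / 5
    raise ValueError("unknown joint coding mode")

def reconstruct(res_joint, csign, mode):
    """Recover the (approximate) Cb and Cr residual samples from one
    joint residual sample, inverting the per-mode formulas."""
    if mode == 1:
        return res_joint, csign * res_joint / 2
    if mode == 2:
        return res_joint, csign * res_joint
    if mode == 3:
        return csign * res_joint / 2, res_joint
    raise ValueError("unknown joint coding mode")
```

When the Cb and Cr residuals exactly satisfy a mode's relationship (e.g., Cr = Cb / 2 for mode 1 with CSign = +1), the round trip is lossless; otherwise the single joint residual is a weighted compromise between the two.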
Step S204, determining the encoding result of any image block based on the joint chroma residual block.
After the joint chroma residual block of any image block is determined, the coding result of the image block is determined based on it, and the coding result is stored or transmitted. Following the manner of steps S201 to S204, the coding result of each image block included in the target image may be determined, so as to obtain the coding result of the target image, which comprises the coding results of all its image blocks.
It should be noted that, in the embodiment of the present application, when the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is not greater than the rate-distortion cost of the non-joint coding mode, the joint coding mode corresponding to the minimum distortion value is determined as the target joint coding mode, and step S203 and step S204 are executed.
In one possible implementation manner, after obtaining the first chroma residual block and the second chroma residual block of any one image block, the method further includes: determining distortion values of a plurality of joint coding modes based on chromaticity residual information of reference pixel points corresponding to any image block; determining a joint coding mode corresponding to the minimum distortion value from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes; and determining the first chroma residual block and the second chroma residual block of any one image block as the coding result of any one image block based on the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value being greater than the rate-distortion cost of the non-joint coding mode.
The relevant content of "determining distortion values of a plurality of joint coding modes based on chroma residual information of a reference pixel point corresponding to any one image block", "determining a joint coding mode corresponding to a minimum distortion value from a plurality of joint coding modes based on distortion values of a plurality of joint coding modes" has been mentioned above, and will not be described herein.
The rate-distortion costs of the joint coding mode corresponding to the minimum distortion value and of the non-joint coding mode may be determined as described above. When the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is greater than that of the non-joint coding mode, adopting the non-joint coding mode improves the coding quality. Of course, if only the data amount of the coding result matters in an application and coding quality is not a concern, the joint coding mode corresponding to the minimum distortion value may still be adopted, that is, it is set as the target joint coding mode, and steps S203 and S204 are executed.
When it is determined that the non-joint coding mode is adopted, a coding result of any one image block may be determined based on the first and second chrominance residual blocks of any one image block.
Alternatively, the first chrominance residual block of any image block may be detected, and first identification information is determined based on the detection result, where the first identification information is used to identify whether a non-zero coefficient exists in the first chrominance residual block of the image block. If the non-zero coefficient exists in the first chrominance residual block of the image block, the first identification information is a first symbol (e.g. 1), and if the non-zero coefficient does not exist in the first chrominance residual block of the image block, the first identification information is a second symbol (e.g. 0). The first identification information may be denoted as tu_cbf_cb.
It will be appreciated that if the first identification information is a second symbol, indicating that the data in the first chrominance residual block of the image block is all 0, any symbol (e.g., 0, null character, special character, etc.) may be used to characterize the first chrominance residual block of the image block.
Likewise, a second chroma residual block of any one of the image blocks may be detected, and second identification information for identifying whether a non-zero coefficient exists in the second chroma residual block of the image block may be determined based on the detection result. If the non-zero coefficient exists in the second chroma residual block of the image block, the second identification information is a third symbol (e.g. 1), and if the non-zero coefficient does not exist in the second chroma residual block of the image block, the second identification information is a fourth symbol (e.g. 0). The second identification information may be denoted as tu_cbf_cr.
It will be appreciated that if the second identification information is a fourth symbol, indicating that the data in the second chroma residual block of the image block is all 0, any symbol (e.g., 0, null character, special character, etc.) may be used to characterize the second chroma residual block of the image block.
The encoding result of any one image block may be determined based on the first identification information, the second identification information, the first chrominance residual block and the second chrominance residual block of any one image block.
Optionally, determining the coding result of any image block based on the joint chroma residual block includes: taking the joint chroma residual block and target identification information as the coding result of the image block, where the target identification information characterizes whether the image block adopts joint coding. Illustratively, the target identification information is denoted tu_joint_cbcr_residual_flag.
If any image block adopts joint coding, its first and second chroma residual blocks are jointly coded based on the target joint coding mode to obtain a joint chroma residual block. In this case, the target identification information may be set to a fifth symbol (e.g., 1) to indicate that the image block adopts joint coding, and the coding result of the image block is determined based on the fifth symbol and the joint chroma residual block of the image block. For example, tu_joint_cbcr_residual_flag = 1 and the joint chroma residual block resJointC[x][y] of the image block are determined as the coding result of the image block.
If any image block does not adopt joint coding, the image block adopts the non-joint coding mode. In this case, the target identification information may be set to a sixth symbol (e.g., 0) to indicate that the image block does not adopt joint coding. The coding result of the image block is determined based on the sixth symbol, the first chroma residual block, and the second chroma residual block of the image block, or additionally on the first identification information and the second identification information. For example, tu_joint_cbcr_residual_flag = 0, the first identification information tu_cbf_cb (equal to 0 or 1), the second identification information tu_cbf_cr (equal to 0 or 1), the first chroma residual block resCb[x][y], and the second chroma residual block resCr[x][y] of the image block are determined as the coding result of the image block.
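A sketch of assembling the coding result under the two branches (the dictionary layout and names are illustrative, not a bitstream format):

```python
def encode_result(res_cb_block, res_cr_block, res_joint_block, use_joint):
    """Assemble the coding result of one image block.  With joint coding
    only the flag and the joint residual block are kept; without it the
    two separate residual blocks plus their non-zero-coefficient flags
    tu_cbf_cb / tu_cbf_cr are kept."""
    if use_joint:
        return {"tu_joint_cbcr_residual_flag": 1,
                "resJointC": res_joint_block}

    def has_nonzero(block):  # any non-zero coefficient in the block?
        return any(v != 0 for row in block for v in row)

    return {"tu_joint_cbcr_residual_flag": 0,
            "tu_cbf_cb": int(has_nonzero(res_cb_block)),
            "tu_cbf_cr": int(has_nonzero(res_cr_block)),
            "resCb": res_cb_block,
            "resCr": res_cr_block}
```

Note that in the joint branch neither tu_cbf_cb nor tu_cbf_cr appears, which is exactly the saving over the related art discussed below Table 1.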
In the related art, the first chroma residual block of the image block is examined and the first identification information determined from the result; the second chroma residual block is examined and the second identification information determined likewise. The first and second identification information together characterize the target joint coding mode adopted by the image block: if a non-zero coefficient exists in the first chroma residual block, the first identification information is 1, otherwise 0; similarly, if a non-zero coefficient exists in the second chroma residual block, the second identification information is 1, otherwise 0. Table 1 below shows the correspondence between the first identification information tu_cbf_cb, the second identification information tu_cbf_cr, and joint coding modes 1-3.
TABLE 1
tu_cbf_cb | tu_cbf_cr | Joint coding mode
1 | 0 | Joint coding mode 1
1 | 1 | Joint coding mode 2
0 | 1 | Joint coding mode 3
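As an illustration of the Table 1 mapping, the sketch below converts the pair of chroma coded-block flags into the joint coding mode they signal in the related art. The function name and the value 0 for "no joint mode" are illustrative assumptions, not taken from any codec source code.

```python
def joint_mode_from_cbf(tu_cbf_cb: int, tu_cbf_cr: int) -> int:
    """Return the joint coding mode (1-3) signalled by the two flags,
    or 0 if the flag pair does not identify a joint coding mode."""
    table = {
        (1, 0): 1,  # non-zero coefficients only in the first (Cb) residual block
        (1, 1): 2,  # non-zero coefficients in both residual blocks
        (0, 1): 3,  # non-zero coefficients only in the second (Cr) residual block
    }
    return table.get((tu_cbf_cb, tu_cbf_cr), 0)
```

In the related art this mapping must be transmitted via the two flags; the embodiment described next derives the mode from reference pixels instead.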
In the related art, when any image block adopts joint coding, the coding result of the image block is determined based on the fifth symbol, the first identification information, the second identification information, and the joint chroma residual block of the image block. In the embodiment of the application, the coding result of the image block is determined based only on the fifth symbol and the joint chroma residual block of the image block. The embodiment of the application thus omits the first identification information and the second identification information, thereby reducing the data volume of the coding result of the image block, improving the coding performance, and reducing both the storage space required to store the coding result and the transmission resources required to transmit it.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the first chrominance residual block, the second chrominance residual block, and the like of the image block referred to in the present application are all acquired with sufficient authorization.
In the above method, the target joint coding mode of any image block is determined based on the chroma residual information of the reference pixel points corresponding to the image block. Since the reference pixel points are pixel points around the image block whose decoding order precedes it, the chroma residual information of these reference pixel points is available when the encoding result of the image block is decoded, and the target joint coding mode can be determined from it. The method thus dispenses with the identification information for identifying the target joint coding mode, reduces the data volume of the encoding result of the image block, and improves the coding performance.
Based on the above implementation environment, the embodiment of the present application provides an image decoding method. Taking the flowchart shown in fig. 4 as an example, the method may be executed by the terminal device 101 or the server 102 in fig. 1, or jointly by both. For convenience of description, the terminal device 101 or the server 102 that performs the image decoding method is referred to as an electronic device, and the method is described as performed by the electronic device. As shown in fig. 4, the method includes steps S401 to S404.
In step S401, for any one of a plurality of image blocks included in the target image, a coding result of any one image block is obtained, and the coding result of any one image block includes a joint chroma residual block.
It has been mentioned above that the encoding end may determine the joint chrominance residual block of any one image block in the manner of step S201 to step S204, and determine the encoding result of the image block based on the joint chrominance residual block of the image block. Therefore, the decoding end can directly or indirectly obtain the encoding result of any image block from the encoding end.
In step S402, the target joint coding mode is determined based on the chroma residual information of the reference pixel corresponding to any one of the image blocks, where the reference pixel is a pixel surrounding any one of the image blocks and the decoding order is located before any one of the image blocks.
Since the reference pixel point is a pixel point located before any image block in decoding order, when the encoding result of any image block is obtained, chroma residual information of the reference pixel point corresponding to any image block can also be obtained. The implementation principle of step S402 is the same as that of step S202, and will not be described herein.
In one possible implementation, determining the target joint coding mode based on the chroma residual information of the reference pixel point corresponding to any one image block includes: determining distortion values of a plurality of joint coding modes based on chromaticity residual information of reference pixel points corresponding to any image block, wherein the distortion values of the joint coding modes are used for representing image quality after any image block is coded based on the joint coding modes; a target joint coding mode is determined from the plurality of joint coding modes based on distortion values of the plurality of joint coding modes.
The embodiment of the application does not limit the number of joint coding modes. The distortion value of each joint coding mode may be determined based on the chroma residual information of the reference pixel points corresponding to any image block. The larger the distortion value of a joint coding mode, the worse the image quality after the image block is encoded based on that joint coding mode; that is, the distortion value of a joint coding mode is inversely related to the resulting image quality.
The method of determining the distortion values of the plurality of joint coding modes and the method of determining the target joint coding mode from the plurality of joint coding modes have both been described in step S202 and are not repeated here.
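As an illustration of the mode derivation described above, the sketch below predicts the Cr residual of each reference pixel from its Cb residual under each joint coding mode, accumulates the absolute prediction error as that mode's distortion value, and selects the mode with the smallest distortion. The per-mode prediction relations mirror the mode 1-3 decoding formulas given later; all names and the exact relations are illustrative assumptions, not a normative procedure from the application.

```python
def mode_distortions(cb_res, cr_res, csign=1):
    """cb_res / cr_res: original Cb and Cr residuals of the reference pixels.
    Returns {mode: distortion}, where distortion is the summed absolute
    error of predicting each Cr residual from the matching Cb residual."""
    predict = {
        1: lambda cb: (csign * cb) // 2,  # mode 1: Cr carries half of Cb
        2: lambda cb: csign * cb,         # mode 2: Cr mirrors Cb
        3: lambda cb: csign * cb * 2,     # mode 3: Cb carries half of Cr
    }
    return {
        mode: sum(abs(cr - pred(cb)) for cb, cr in zip(cb_res, cr_res))
        for mode, pred in predict.items()
    }

def pick_target_mode(cb_res, cr_res, csign=1):
    """Target joint coding mode = the mode with the minimum distortion."""
    dist = mode_distortions(cb_res, cr_res, csign)
    return min(dist, key=dist.get)
```

Because both the encoder and the decoder run this derivation on the same reference pixels, they arrive at the same target joint coding mode without any mode identifier in the bitstream.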
Step S403, based on the target joint coding mode, performing decoding processing on the joint chroma residual block to obtain a first chroma residual block and a second chroma residual block of any image block.
In the embodiment of the application, the symbol relationship can be acquired. The manner in which the encoding end determines the symbol relationship has been described in the embodiment related to fig. 2 and is not repeated here. The decoding end can directly or indirectly acquire the symbol relationship determined by the encoding end.
After determining the target joint coding mode, the joint chroma residual block of any one image block may be decoded based on the target joint coding mode and the symbol relationship of the image block to obtain a first chroma residual block and a second chroma residual block of the image block. Since the target joint coding mode is any of a plurality of joint coding modes, there is also a difference in the manner in which joint chroma residual blocks of any one image block are decoded in different joint coding modes.
The following further explains the case where the target joint coding mode is any one of joint coding modes 1 to 3. In the formulas below, resCb[x][y] denotes the first chrominance residual block of any image block, resJointC[x][y] denotes the joint chrominance residual block of the image block, resCr[x][y] denotes the second chrominance residual block, and CSign denotes the symbol relationship. Referring to Table 2 below, Table 2 shows the decoding method corresponding to each of joint coding modes 1 to 3.
TABLE 2
Joint coding mode | Decoding formulas
Joint coding mode 1 | resCb[x][y] = resJointC[x][y]; resCr[x][y] = (CSign × resJointC[x][y]) >> 1
Joint coding mode 2 | resCb[x][y] = resJointC[x][y]; resCr[x][y] = CSign × resJointC[x][y]
Joint coding mode 3 | resCr[x][y] = resJointC[x][y]; resCb[x][y] = (CSign × resJointC[x][y]) >> 1
Where ">> 1" in Table 2 denotes shifting binary data one bit to the right, which is equivalent to dividing the decimal value by 2. Thus, the following conclusions can be drawn from Table 2.
(1) The target joint coding mode is joint coding mode 1, which can be noted as mode = 1. Joint coding mode 1 satisfies the formulas: resCb[x][y] = resJointC[x][y], resCr[x][y] = (CSign × resJointC[x][y]) >> 1.
(2) The target joint coding mode is joint coding mode 2, which can be noted as mode = 2. Joint coding mode 2 satisfies the formulas: resCb[x][y] = resJointC[x][y], resCr[x][y] = CSign × resJointC[x][y].
(3) The target joint coding mode is joint coding mode 3, which can be noted as mode = 3. Joint coding mode 3 satisfies the formulas: resCr[x][y] = resJointC[x][y], resCb[x][y] = (CSign × resJointC[x][y]) >> 1.
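The three decoding formulas can be sketched as one element-wise routine over the joint chrominance residual block (represented here as a 2-D list). The ">> 1" right shift matches the Table 2 note; the function and parameter names are illustrative assumptions.

```python
def decode_joint_residual(res_joint, mode, csign):
    """Split a joint chroma residual block into (resCb, resCr) per mode 1-3."""
    if mode == 1:
        # Cb takes the joint residual; Cr takes half of it (sign-adjusted).
        res_cb = [list(row) for row in res_joint]
        res_cr = [[(csign * v) >> 1 for v in row] for row in res_joint]
    elif mode == 2:
        # Cb takes the joint residual; Cr mirrors it (sign-adjusted).
        res_cb = [list(row) for row in res_joint]
        res_cr = [[csign * v for v in row] for row in res_joint]
    elif mode == 3:
        # Cr takes the joint residual; Cb takes half of it (sign-adjusted).
        res_cr = [list(row) for row in res_joint]
        res_cb = [[(csign * v) >> 1 for v in row] for row in res_joint]
    else:
        raise ValueError("mode must be 1, 2 or 3")
    return res_cb, res_cr
```

For example, with mode = 1 and CSign = 1, a joint residual value of 4 yields resCb = 4 and resCr = 2.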
Step S404, determining a decoding result of any one image block based on the first chroma residual block and the second chroma residual block of any one image block.
In the embodiment of the application, any method can be adopted to obtain the predicted first chrominance block of any image block, and the predicted first chrominance block of the image block and the first chrominance residual block of the image block are added to obtain the original first chrominance block of the image block. Optionally, adding the predicted first chrominance value of each pixel point in the predicted first chrominance block of any image block to the first chrominance residual of the corresponding pixel point in the first chrominance residual block of the image block to obtain the original first chrominance value of the corresponding pixel point in the original first chrominance block of the image block.
Similarly, any method may be used to obtain a predicted second chroma block of any image block, and the predicted second chroma block of the image block is added to the second chroma residual block of the image block to obtain the original second chroma block of the image block. Optionally, adding the predicted second chroma value of each pixel point in the predicted second chroma block of any image block to the second chroma residual of the corresponding pixel point in the second chroma residual block of the image block to obtain the original second chroma value of the corresponding pixel point in the original second chroma block of the image block.
By the above method, the original first chrominance value and the original second chrominance value of each pixel point in any image block can be determined, which is equivalent to obtaining the decoding result of the image block, i.e., reconstructing the image block. By decoding the plurality of image blocks included in the target image in this way, the decoding result of the target image is obtained and reconstruction of the target image is realized.
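The reconstruction in step S404 can be sketched as an element-wise addition of a predicted chroma block and the matching decoded chroma residual block. The clip to an 8-bit sample range is an illustrative assumption (a common final step, not stated in the text above); the names are likewise illustrative.

```python
def reconstruct_chroma(pred_block, res_block, max_val=255):
    """Element-wise predicted value + residual, clipped to [0, max_val]."""
    return [
        [min(max(p + r, 0), max_val) for p, r in zip(pred_row, res_row)]
        for pred_row, res_row in zip(pred_block, res_block)
    ]
```

The same routine reconstructs the original first chrominance block from the first chrominance residual block and the original second chrominance block from the second chrominance residual block.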
Optionally, the encoding result of any image block further includes target identification information; if the target identification information characterizes that any image block adopts joint coding, the coding result of any image block comprises a joint chroma residual block.
The target identification information may be a fifth symbol or a sixth symbol. If the target identification information is a fifth symbol, it indicates that any image block adopts joint coding, and at this time, the coding result of any image block includes a joint chroma residual block in addition to the fifth symbol. The decoding result of any one image block may be determined in the manner of step S402 to step S404.
Optionally, the encoding result of any image block further includes target identification information; if the target identification information indicates that any image block does not adopt joint coding, the coding result of any image block comprises a first chromaticity residual block and a second chromaticity residual block of any image block; obtaining the coding result of any image block, and then further comprising: the decoding result of any one image block is determined based on the first chrominance residual block and the second chrominance residual block of any one image block.
If the target identification information is the sixth symbol, it indicates that any image block does not adopt joint coding, that is, any image block adopts a non-joint coding mode. At this time, the encoding result of any one image block includes the first chrominance residual block and the second chrominance residual block of any one image block in addition to the sixth symbol.
Since the encoding result of any image block includes the first chroma residual block and the second chroma residual block of the image block, the decoding result of any image block may be determined based on the first chroma residual block and the second chroma residual block of any image block in the manner of step S404, which is not described herein.
Alternatively, the encoding result of any image block may include the sixth symbol, the first identification information, the second identification information, and the first and second chrominance residual blocks of the image block. The first identification information identifies whether a non-zero coefficient exists in the first chrominance residual block of the image block, and the second identification information identifies whether a non-zero coefficient exists in the second chrominance residual block. Accordingly, the first chrominance residual block and the second chrominance residual block can be accurately located in the encoding result based on the first identification information and the second identification information, so that the decoding result of the image block is determined based on the first chrominance residual block and the second chrominance residual block.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the encoding result and the like of any image block involved in the present application are acquired with sufficient authorization.
In the above method, the encoding result of any image block includes a joint chroma residual block. Since the reference pixel points are pixel points around the image block whose decoding order precedes it, the chroma residual information of the reference pixel points corresponding to the image block is available when its encoding result is decoded, and the target joint coding mode can be determined based on that chroma residual information. The joint chroma residual block is then decoded based on the target joint coding mode, so that the identification information for identifying the target joint coding mode is omitted, the data volume of the encoding result of the image block is reduced, and the coding performance is improved.
The image encoding method and the image decoding method according to the embodiments of the present application are described above from the viewpoint of method steps, and are systematically described below in conjunction with fig. 5 and 6.
Referring to fig. 5, fig. 5 is a schematic diagram of an image encoding method according to an embodiment of the present application, and the image encoding method shown in fig. 5 is performed by an encoding end. The encoding end firstly acquires a target image and divides the target image into a plurality of image blocks. For any one image block, a first chroma residual block and a second chroma residual block of the image block are acquired. Then, distortion values of a plurality of joint coding modes are determined based on the chromaticity residual information of the reference pixel point corresponding to any one image block. And determining the rate distortion cost of the joint coding mode and the rate distortion cost of the non-joint coding mode corresponding to the minimum distortion value.
The rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is then compared with the rate-distortion cost of the non-joint coding mode.
If the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is not greater than that of the non-joint coding mode, that joint coding mode is the target joint coding mode. The first chroma residual block and the second chroma residual block of the image block are jointly encoded based on this joint coding mode to obtain the joint chroma residual block. The target identification information is set to 1, and the 1 and the joint chroma residual block are determined as the encoding result of the image block.
If the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is greater than that of the non-joint coding mode, the target identification information is set to 0, and the 0, the first chroma residual block of the image block, and the second chroma residual block of the image block are determined as the encoding result of the image block.
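The encoder decision of Fig. 5 can be sketched as follows: pick the joint coding mode with the smallest distortion, compare its rate-distortion cost with the cost of non-joint coding, and set the target identification information accordingly. The dictionary inputs and names are illustrative placeholders for the encoder's own measurements, not a real API.

```python
def choose_encoding(distortions, rd_cost_joint, rd_cost_separate):
    """distortions: {mode: distortion value}; rd_cost_joint: {mode: RD cost};
    rd_cost_separate: RD cost of the non-joint coding mode.
    Returns (target identification value, chosen joint mode or None)."""
    best_mode = min(distortions, key=distortions.get)
    if rd_cost_joint[best_mode] <= rd_cost_separate:
        return 1, best_mode  # flag = 1: emit the joint chroma residual block
    return 0, None           # flag = 0: emit the two chroma residual blocks
```

Only the distortion comparison is repeated at the decoder; the rate-distortion comparison is an encoder-side choice signalled by the single flag.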
Referring to fig. 6, fig. 6 is a schematic diagram of an image decoding method according to an embodiment of the present application, and the image decoding method shown in fig. 6 is performed by a decoding end. The decoding end firstly obtains the coding result of any image block, and the coding result of the image block comprises target identification information.
If the target identification information is 1, the coding result of the image block further comprises a joint chroma residual block. The distortion values of a plurality of joint coding modes can be determined based on the chroma residual information of the reference pixel points corresponding to the image block, and the joint chroma residual block of the image block is decoded based on the joint coding mode corresponding to the minimum distortion value to obtain a first chroma residual block and a second chroma residual block of the image block. The decoding result of the image block is then determined based on the first chroma residual block and the second chroma residual block of the image block.
If the target identification information is 0, the coding result of the image block further comprises a first chroma residual block of the image block and a second chroma residual block of the image block. The decoding result of the image block may be determined based on the first chrominance residual block of the image block and the second chrominance residual block of the image block.
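The two branches of the Fig. 6 decoding flow can be sketched as a single dispatch on the target identification information. The dictionary layout and the `derive_mode` / `split_joint` callables stand in for the mode derivation and residual-splitting steps described above; all of these names are illustrative assumptions.

```python
def decode_block(result, derive_mode, split_joint):
    """result: decoded fields of one image block's encoding result.
    Returns (first chroma residual block, second chroma residual block)."""
    if result["flag"] == 1:
        # Joint branch: derive the mode from reference-pixel residuals,
        # then split the joint chroma residual block.
        mode = derive_mode(result)
        return split_joint(result["joint_res"], mode)
    # Non-joint branch: both residual blocks are carried directly.
    return result["res_cb"], result["res_cr"]
```

Either branch ends with the reconstruction step of S404, adding each residual block to its predicted chroma block.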
In the above method, the reference pixel points are pixel points around any image block whose decoding order precedes it. Therefore, when the encoding result of any image block is decoded, the chroma residual information of the reference pixel points corresponding to the image block can be obtained, and the target joint coding mode can be determined based on that chroma residual information. The method thus omits the identification information for identifying the target joint coding mode, reduces the data volume of the encoding result of the image block, and improves the coding performance.
Fig. 7 is a schematic structural diagram of an image encoding device according to an embodiment of the present application, where, as shown in fig. 7, the device includes:
an obtaining module 701, configured to obtain, for any one of a plurality of image blocks included in a target image, a first chrominance residual block and a second chrominance residual block of any one image block;
A determining module 702, configured to determine a target joint coding mode based on chroma residual information of a reference pixel corresponding to any one image block, where the reference pixel is a pixel around any one image block and a decoding order is located before any one image block;
a coding module 703, configured to perform joint coding on the first chroma residual block and the second chroma residual block of any image block based on a target joint coding mode, so as to obtain a joint chroma residual block;
The determining module 702 is further configured to determine a coding result of any one image block based on the joint chroma residual block.
In a possible implementation manner, the determining module 702 is configured to determine distortion values of a plurality of joint coding modes based on chromaticity residual information of reference pixel points corresponding to any image block, where the distortion values of the joint coding modes are used to characterize image quality after any image block is encoded based on the joint coding modes; a target joint coding mode is determined from the plurality of joint coding modes based on distortion values of the plurality of joint coding modes.
In one possible implementation, the chroma residual information includes an original first chroma residual and an original second chroma residual; a determining module 702, configured to determine, for any one joint coding mode, a predicted second chroma residual of a reference pixel corresponding to the any one joint coding mode by using an original first chroma residual of the reference pixel; and determining a distortion value corresponding to any one joint coding mode based on the original second chromaticity residual of the reference pixel point and the predicted second chromaticity residual of the reference pixel point corresponding to any one joint coding mode.
In one possible implementation, the number of reference pixel points is a plurality; a determining module 702, configured to calculate, for any reference pixel point, a difference between an original second chroma residual of any reference pixel point and a predicted second chroma residual of any reference pixel point corresponding to any joint coding mode, to obtain a distortion value of any reference pixel point corresponding to any joint coding mode; and determining distortion values corresponding to any one joint coding mode based on the distortion values of the plurality of reference pixel points corresponding to any one joint coding mode.
In one possible implementation, the determining module 702 is configured to determine, based on distortion values of a plurality of joint coding modes, a joint coding mode corresponding to a minimum distortion value from the plurality of joint coding modes; determining the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value, wherein the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is used for representing the coding quality when coding is carried out based on the joint coding mode corresponding to the minimum distortion value; and determining the joint coding mode corresponding to the minimum distortion value as a target joint coding mode based on the fact that the rate distortion cost of the joint coding mode corresponding to the minimum distortion value is not greater than that of the non-joint coding mode.
In a possible implementation manner, the determining module 702 is further configured to determine distortion values of a plurality of joint coding modes based on chromaticity residual information of a reference pixel point corresponding to any image block; determining a joint coding mode corresponding to the minimum distortion value from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes; and determining the coding result of any one image block based on the first chroma residual block and the second chroma residual block of any one image block based on the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is greater than the rate-distortion cost of the non-joint coding mode.
In a possible implementation manner, the determining module 702 is configured to use the joint chroma residual block and the target identification information as a coding result of any image block, where the target identification information is used to characterize whether any image block adopts joint coding.
The device determines the target joint coding mode of any image block based on the chroma residual information of the reference pixel points corresponding to the image block. Since the reference pixel points are pixel points around the image block whose decoding order precedes it, the chroma residual information of these reference pixel points is available when the encoding result of the image block is decoded, and the target joint coding mode can be determined from it. The device thus omits the identification information for identifying the target joint coding mode, reduces the data volume of the encoding result of the image block, and improves the coding performance.
It should be understood that, in implementing the functions of the apparatus provided in fig. 7, only the division of the functional modules is illustrated, and in practical application, the functional modules may be allocated to different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 8 is a schematic structural diagram of an image decoding device according to an embodiment of the present application. As shown in fig. 8, the device includes:
An obtaining module 801, configured to obtain, for any one of a plurality of image blocks included in a target image, a coding result of any one image block, where the coding result of any one image block includes a joint chroma residual block;
a determining module 802, configured to determine a target joint coding mode based on chroma residual information of a reference pixel corresponding to any one image block, where the reference pixel is a pixel around any one image block and a decoding order is located before any one image block;
a decoding module 803, configured to perform decoding processing on the combined chroma residual block based on the target joint coding mode, to obtain a first chroma residual block and a second chroma residual block of any one image block;
the determining module 802 is further configured to determine a decoding result of any image block based on the first chroma residual block and the second chroma residual block of any image block.
In a possible implementation manner, the determining module 802 is configured to determine, based on chromaticity residual information of a reference pixel point corresponding to any one image block, distortion values of a plurality of joint coding modes, where the distortion values of the joint coding modes are used to characterize image quality after any one image block is encoded based on the joint coding modes; a target joint coding mode is determined from the plurality of joint coding modes based on distortion values of the plurality of joint coding modes.
In one possible implementation, the encoding result of any image block further includes target identification information; if the target identification information characterizes that any image block adopts joint coding, the coding result of any image block comprises a joint chroma residual block.
In one possible implementation, the encoding result of any image block further includes target identification information; if the target identification information indicates that any image block does not adopt joint coding, the coding result of any image block comprises a first chromaticity residual block and a second chromaticity residual block of any image block; the determining module 802 is further configured to determine a decoding result of any image block based on the first chroma residual block and the second chroma residual block of any image block.
In the above device, the encoding result of any image block includes a joint chroma residual block. Since the reference pixel points are pixel points around the image block whose decoding order precedes it, the chroma residual information of the reference pixel points corresponding to the image block is available when its encoding result is decoded, and the target joint coding mode can be determined based on that chroma residual information. The joint chroma residual block is then decoded based on the target joint coding mode, so that the identification information for identifying the target joint coding mode is omitted, the data volume of the encoding result of the image block is reduced, and the coding performance is improved.
It should be understood that, in implementing the functions of the apparatus provided in fig. 8, only the division of the functional modules is illustrated, and in practical application, the functional modules may be allocated to different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 9 shows a block diagram of a terminal device 900 according to an exemplary embodiment of the present application. The terminal device 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 901 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 901 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 901 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one computer program for execution by processor 901 to implement the image encoding method or the image decoding method provided by the method embodiments of the present application.
In some embodiments, the terminal device 900 may optionally further include: a peripheral interface 903 and at least one peripheral device. The processor 901, the memory 902, and the peripheral interface 903 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral interface 903 via a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display 905, a camera assembly 906, an audio circuit 907, and a power supply 908.
The peripheral interface 903 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 901 and the memory 902. In some embodiments, the processor 901, the memory 902, and the peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 904 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 904 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 905 is a touch display, the display 905 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 901 as a control signal for processing. At this time, the display 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 905, disposed on the front panel of the terminal device 900; in other embodiments, there may be at least two displays 905, respectively disposed on different surfaces of the terminal device 900 or in a folded design; in still other embodiments, the display 905 may be a flexible display disposed on a curved surface or a folded surface of the terminal device 900. The display 905 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display 905 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera may be fused to realize a background blurring function, or the main camera and the wide-angle camera may be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input the electrical signals to the processor 901 for processing, or to the radio frequency circuit 904 for voice communication. For stereo acquisition or noise reduction, a plurality of microphones may be disposed at different positions of the terminal device 900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 907 may also include a headphone jack.
The power supply 908 is used to supply power to the various components in the terminal device 900. The power supply 908 may use an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 908 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast-charge technology.
In some embodiments, the terminal device 900 also includes one or more sensors 909. The one or more sensors 909 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, optical sensor 914, and proximity sensor 915.
The acceleration sensor 911 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal device 900. For example, the acceleration sensor 911 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 901 may control the display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 911. The acceleration sensor 911 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 912 may detect the body direction and rotation angle of the terminal device 900, and may cooperate with the acceleration sensor 911 to collect the user's 3D motion on the terminal device 900. The processor 901 may implement the following functions according to the data collected by the gyro sensor 912: motion sensing (for example, changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 913 may be disposed at a side frame of the terminal device 900 and/or at an underlayer of the display 905. When the pressure sensor 913 is disposed at a side frame of the terminal device 900, a grip signal of the user on the terminal device 900 may be detected, and the processor 901 performs left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed at the underlayer of the display 905, the processor 901 controls an operability control on the UI according to the user's pressure operation on the display 905. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 914 is used to collect the ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the display 905 based on the ambient light intensity collected by the optical sensor 914. Specifically, when the ambient light intensity is high, the display brightness of the display 905 is turned up; when the ambient light intensity is low, the display brightness of the display 905 is turned down. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 based on the ambient light intensity collected by the optical sensor 914.
The proximity sensor 915, also referred to as a distance sensor, is typically disposed on the front panel of the terminal device 900. The proximity sensor 915 is used to collect the distance between the user and the front of the terminal device 900. In one embodiment, when the proximity sensor 915 detects that the distance between the user and the front of the terminal device 900 gradually decreases, the processor 901 controls the display 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 915 detects that the distance between the user and the front of the terminal device 900 gradually increases, the processor 901 controls the display 905 to switch from the screen-off state to the screen-on state.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1000 may vary considerably due to different configurations or performance, and may include one or more processors 1001 (for example, CPUs) and one or more memories 1002, where the one or more memories 1002 store at least one computer program that is loaded and executed by the one or more processors 1001 to implement the image encoding method or the image decoding method provided by the above method embodiments of the present application. Of course, the server 1000 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for implementing other functions of the device, which are not described herein.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one computer program loaded and executed by a processor to cause an electronic device to implement any one of the image encoding method or the image decoding method described above.
Alternatively, the above-mentioned computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program or a computer program product is also provided, in which at least one computer program is stored, which is loaded and executed by a processor, to cause an electronic device to implement any one of the above-mentioned image encoding methods or image decoding methods.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the present application, any modifications, equivalent substitutions, improvements, etc. that fall within the principles of the present application should be included in the scope of the present application.

Claims (16)

1. An image encoding method, the method comprising:
For any one image block in a plurality of image blocks included in a target image, acquiring a first chroma residual block and a second chroma residual block of the any one image block;
Determining a target joint coding mode based on chroma residual information of reference pixel points corresponding to the any one image block, wherein the reference pixel points are pixel points around the any one image block whose decoding order precedes that of the any one image block;
based on the target joint coding mode, performing joint coding on the first chroma residual block and the second chroma residual block of any image block to obtain a joint chroma residual block;
and determining the coding result of any image block based on the joint chroma residual block.
2. The method according to claim 1, wherein the determining the target joint coding mode based on the chroma residual information of the reference pixel points corresponding to the any one image block comprises:
Determining distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the any one image block, wherein the distortion value of a joint coding mode is used for representing the image quality after the any one image block is coded based on the joint coding mode;
Determining a target joint coding mode from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes.
3. The method of claim 2, wherein the chroma residual information comprises an original first chroma residual and an original second chroma residual; the determining distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the any one image block comprises:
For any one joint coding mode, determining a predicted second chroma residual of the reference pixel point corresponding to the any one joint coding mode by using the original first chroma residual of the reference pixel point;
And determining a distortion value corresponding to the any one joint coding mode based on the original second chroma residual of the reference pixel point and the predicted second chroma residual of the reference pixel point corresponding to the any one joint coding mode.
4. The method according to claim 3, wherein the number of reference pixel points is a plurality; the determining a distortion value corresponding to the any one joint coding mode based on the original second chroma residual of the reference pixel point and the predicted second chroma residual of the reference pixel point corresponding to the any one joint coding mode comprises:
For any one reference pixel point, calculating a difference value between the original second chroma residual of the any one reference pixel point and the predicted second chroma residual of the any one reference pixel point corresponding to the any one joint coding mode, to obtain a distortion value of the any one reference pixel point corresponding to the any one joint coding mode;
And determining the distortion value corresponding to the any one joint coding mode based on the distortion values of the plurality of reference pixel points corresponding to the any one joint coding mode.
5. The method of claim 2, wherein the determining a target joint coding mode from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes comprises:
Determining a joint coding mode corresponding to a minimum distortion value from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes;
Determining a rate-distortion cost of the joint coding mode corresponding to the minimum distortion value, wherein the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value is used for representing the coding quality when coding is performed based on the joint coding mode corresponding to the minimum distortion value;
And determining the joint coding mode corresponding to the minimum distortion value as the target joint coding mode based on the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value being not greater than the rate-distortion cost of the non-joint coding mode.
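The mode-selection steps in claims 2 to 5 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the three linear joint modes (the second chroma residual predicted as half of, equal to, or minus half of the first chroma residual), the function names, and the absolute-difference distortion measure are all hypothetical assumptions introduced for the example.

```python
# Hypothetical sketch of distortion-based joint-coding-mode selection.
# Mode definitions and all names are illustrative assumptions.

# Each joint coding mode predicts a pixel's second chroma residual (Cr)
# from its first chroma residual (Cb) via a simple linear relation.
JOINT_MODES = {
    "mode_1": lambda cb: cb // 2,    # Cr predicted as  Cb/2
    "mode_2": lambda cb: cb,         # Cr predicted as  Cb
    "mode_3": lambda cb: -cb // 2,   # Cr predicted as -Cb/2
}

def mode_distortion(mode_fn, ref_pixels):
    """Sum of per-reference-pixel distortions for one joint coding mode:
    |original Cr residual - Cr residual predicted from the Cb residual|."""
    return sum(abs(orig_cr - mode_fn(orig_cb)) for orig_cb, orig_cr in ref_pixels)

def select_joint_mode(ref_pixels):
    """Return (mode name, distortion) of the minimum-distortion mode.

    The distortion is computed only from already-decoded reference pixels,
    so a decoder can repeat the same derivation without extra signalling."""
    distortions = {name: mode_distortion(fn, ref_pixels)
                   for name, fn in JOINT_MODES.items()}
    best = min(distortions, key=distortions.get)
    return best, distortions[best]

# ref_pixels: (original Cb residual, original Cr residual) pairs taken
# from neighbouring, previously decoded pixels of the current block.
ref_pixels = [(4, 4), (6, 5), (-2, -2), (8, 9)]
print(select_joint_mode(ref_pixels))  # → ('mode_2', 2)
```

Per claim 5, an encoder would then compare the rate-distortion cost of this minimum-distortion mode against that of non-joint coding before committing to it; that cost comparison is omitted from the sketch.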
6. The method of claim 1, wherein after the obtaining the first chroma residual block and the second chroma residual block of the any one image block, the method further comprises:
Determining distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the any one image block;
Determining a joint coding mode corresponding to a minimum distortion value from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes;
And, based on the rate-distortion cost of the joint coding mode corresponding to the minimum distortion value being greater than the rate-distortion cost of the non-joint coding mode, determining the coding result of the any one image block based on the first chroma residual block and the second chroma residual block of the any one image block.
7. The method according to any one of claims 1 to 6, wherein the determining the coding result of the any one image block based on the joint chroma residual block comprises:
Taking the joint chroma residual block and target identification information as the coding result of the any one image block, wherein the target identification information is used for representing whether the any one image block adopts joint coding.
8. An image decoding method, the method comprising:
For any image block in a plurality of image blocks included in a target image, acquiring a coding result of the any image block, wherein the coding result of the any image block comprises a joint chroma residual block;
Determining a target joint coding mode based on chroma residual information of reference pixel points corresponding to the any one image block, wherein the reference pixel points are pixel points around the any one image block whose decoding order precedes that of the any one image block;
Decoding the joint chroma residual block based on the target joint coding mode to obtain a first chroma residual block and a second chroma residual block of any image block;
And determining a decoding result of the any image block based on the first chroma residual block and the second chroma residual block of the any image block.
9. The method according to claim 8, wherein the determining the target joint coding mode based on the chroma residual information of the reference pixel points corresponding to the any one image block comprises:
Determining distortion values of a plurality of joint coding modes based on the chroma residual information of the reference pixel points corresponding to the any one image block, wherein the distortion value of a joint coding mode is used for representing the image quality after the any one image block is coded based on the joint coding mode;
Determining a target joint coding mode from the plurality of joint coding modes based on the distortion values of the plurality of joint coding modes.
10. The method of claim 8, wherein the coding result of the any one image block further comprises target identification information; and if the target identification information represents that the any one image block adopts joint coding, the coding result of the any one image block comprises the joint chroma residual block.
11. The method of claim 8, wherein the coding result of the any one image block further comprises target identification information; if the target identification information represents that the any one image block does not adopt joint coding, the coding result of the any one image block comprises the first chroma residual block and the second chroma residual block of the any one image block;
after the obtaining the coding result of the any one image block, the method further comprises:
Determining a decoding result of the any one image block based on the first chroma residual block and the second chroma residual block of the any one image block.
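The decoder-side reconstruction described in claims 8 and 11 can be sketched as below. The mapping is a hypothetical assumption introduced for illustration only: the joint value is taken directly as the first chroma residual, and the second chroma residual is derived through the linear relation of the selected target joint coding mode; the mode names and relations are invented for the example and are not taken from the patent.

```python
# Hypothetical sketch of decoder-side splitting of a joint chroma
# residual block into the first (Cb) and second (Cr) residual blocks.
# The joint-value-to-residual mapping is an illustrative assumption.

MODE_TO_CR = {
    "mode_1": lambda cb: cb // 2,    # Cr derived as  Cb/2
    "mode_2": lambda cb: cb,         # Cr derived as  Cb
    "mode_3": lambda cb: -cb // 2,   # Cr derived as -Cb/2
}

def decode_joint_block(joint_block, target_mode):
    """Split one joint chroma residual block (a 2D list) back into the
    first and second chroma residual blocks using the target mode."""
    derive_cr = MODE_TO_CR[target_mode]
    first = [row[:] for row in joint_block]                        # Cb residuals
    second = [[derive_cr(v) for v in row] for row in joint_block]  # Cr residuals
    return first, second

joint = [[4, 6], [-2, 8]]
cb, cr = decode_joint_block(joint, "mode_2")
print(cb, cr)  # with mode_2, both residual blocks equal the joint block
```

Because the target mode is re-derived from already-decoded reference pixels (claim 8), the decoder needs only the joint chroma residual block and the target identification flag from the bitstream, not the mode index itself.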
12. An image encoding apparatus, the apparatus comprising:
An acquisition module, configured to acquire, for any one image block of a plurality of image blocks included in a target image, a first chroma residual block and a second chroma residual block of the any one image block;
A determining module, configured to determine a target joint coding mode based on chroma residual information of reference pixel points corresponding to the any one image block, wherein the reference pixel points are pixel points around the any one image block whose decoding order precedes that of the any one image block;
A coding module, configured to perform joint coding on the first chroma residual block and the second chroma residual block of the any one image block based on the target joint coding mode to obtain a joint chroma residual block;
the determining module is further configured to determine a coding result of the any one image block based on the joint chroma residual block.
13. An image decoding apparatus, characterized in that the apparatus comprises:
An obtaining module, configured to obtain, for any one image block of a plurality of image blocks included in a target image, a coding result of the any one image block, wherein the coding result of the any one image block comprises a joint chroma residual block;
A determining module, configured to determine a target joint coding mode based on chroma residual information of reference pixel points corresponding to the any one image block, wherein the reference pixel points are pixel points around the any one image block whose decoding order precedes that of the any one image block;
A decoding module, configured to decode the joint chroma residual block based on the target joint coding mode to obtain a first chroma residual block and a second chroma residual block of the any one image block;
the determining module is further configured to determine a decoding result of the any one image block based on the first chroma residual block and the second chroma residual block of the any one image block.
14. An electronic device comprising a processor and a memory, wherein the memory stores at least one computer program, the at least one computer program being loaded and executed by the processor to cause the electronic device to implement the image encoding method of any one of claims 1 to 7 or the image decoding method of any one of claims 8 to 11.
15. A computer readable storage medium, wherein at least one computer program is stored in the computer readable storage medium, and the at least one computer program is loaded and executed by a processor, to cause an electronic device to implement the image encoding method of any one of claims 1 to 7 or the image decoding method of any one of claims 8 to 11.
16. A computer program product, characterized in that at least one computer program is stored in the computer program product, which is loaded and executed by a processor to cause an electronic device to implement the image encoding method according to any one of claims 1 to 7 or to implement the image decoding method according to any one of claims 8 to 11.
CN202210635198.5A 2022-06-06 2022-06-06 Image encoding method, image decoding method, device, equipment and storage medium Active CN115118979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210635198.5A CN115118979B (en) 2022-06-06 2022-06-06 Image encoding method, image decoding method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115118979A CN115118979A (en) 2022-09-27
CN115118979B true CN115118979B (en) 2024-11-22

Family

ID=83326426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210635198.5A Active CN115118979B (en) 2022-06-06 2022-06-06 Image encoding method, image decoding method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115118979B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113163212A (en) * 2020-01-07 2021-07-23 腾讯科技(深圳)有限公司 Video decoding method and apparatus, video encoding method and apparatus, medium, and device
CN113965764A (en) * 2020-07-21 2022-01-21 Oppo广东移动通信有限公司 Image coding method, image decoding method and related device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210042841A (en) * 2019-10-10 2021-04-20 한국전자통신연구원 Method and apparatus for encoding/decoding image, recording medium for stroing bitstream


Also Published As

Publication number Publication date
CN115118979A (en) 2022-09-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant