Detailed Description
The application provides an image decoding method and a decoder for optimizing the decoding algorithm, reducing the hardware overhead of the decoder, and at the same time improving decoder performance.
The technical solutions in the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present application, not all of them.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In a video bitstream, the images in a video are compression-encoded according to a video codec protocol to obtain coding blocks; for example, an image may be encoded into a Macroblock (MB). In a video codec system, after an MB is transmitted from the encoder to the decoder, the decoder decodes the MB according to the video codec protocol to obtain the corresponding decoding information, and thus the image carried in the MB. Because an encoded MB carries a large amount of data, when decoding it the decoder first divides the MB into a plurality of coded sub-blocks, then decodes the sub-blocks one by one until all sub-blocks in the MB have been decoded. Fig. 1 is a schematic diagram of dividing a 16 × 16 coding block MB into 16 coded sub-blocks (a0-a15) of size 4 × 4; the arrangement of the coded sub-blocks may be arbitrary, and only one arrangement is shown in the figure.
On one hand, analysis of the mode distribution of video bitstreams shows the following: for a coding block encoded in a motion prediction mode, the coding mode, motion vector and reference picture index of the co-located blocks corresponding to the divided coded sub-blocks are in many cases all the same, or the same for a subset of the coded sub-blocks, where a co-located block is a coding block that has the same coordinate information as a coded sub-block and has already been decoded.
Taking fig. 1 as an example, for the coded sub-blocks a0-a15 there may be a case where the coding mode, motion vector and reference picture index of all 16 co-located blocks corresponding to a0-a15 are the same; or the coding mode, motion vector and reference picture index of the 8 co-located blocks corresponding to a0-a7 are the same, and those of the other 8 co-located blocks corresponding to a8-a15 are the same.
On the other hand, analysis of the various video codec protocols (such as H.264, H.265 and AVS2.0) shows the following: when decoding a coding block encoded in a motion prediction mode, the coding mode, motion vector and reference picture index of the co-located blocks corresponding to different coded sub-blocks may be the same. Therefore, to improve the decoding efficiency of the decoder, the complicated decoding computations performed on the coding mode, motion vector and reference picture index of a co-located block, such as the scaling operation, can be simplified, reducing the computation steps, shortening the computation time, and saving the computing resources of the decoder.
Based on these two observations, the image decoding method in the embodiments of the present application optimizes the decoding method so as to reduce decoding time, save the computing resources of the decoder, and improve the decoding efficiency of the decoder.
For convenience of understanding, the image decoding method in the embodiment of the present application is described in detail below with reference to fig. 2, which specifically includes the following steps:
as shown in fig. 2, an embodiment of an image decoding method in the embodiment of the present application includes:
201. The decoder stores auxiliary information of the co-located image.
The co-located image is an image that was decoded before the current image block and has the same coordinate information as the current image block. After the co-located image is decoded, the decoder stores its decoding information together with its auxiliary information, where the decoding information includes at least one of a coding mode, a motion vector and a reference picture index.
The auxiliary information of the co-located image indicates which coded sub-blocks in the co-located image have the same decoding information and which do not. The decoder determines whether the decoding information of all coded sub-blocks in the co-located image, and/or of at least two coded sub-blocks at specific positions in the co-located image, is the same.
If it is the same, the decoder sets a flag bit to 1; if not, the decoder sets the flag bit to 0, thereby obtaining a marking result. Finally, the decoder stores the marking result as the auxiliary information of the co-located image, so that the decoder can use it when decoding the coded sub-blocks of the current image block.
For a specific implementation of acquiring the auxiliary information of the co-located image, reference may be made to the description in application scenario one below; details are not repeated here.
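As a hedged illustration of the marking in step 201 (the data layout and field names below are assumptions for illustration, not part of any codec protocol), a flag bit can be derived by comparing the decoding information of a set of coded sub-blocks:

```python
def mark_flag(sub_blocks):
    # Compare (coding mode, motion vector, reference picture index) tuples;
    # the flag bit is 1 only when every sub-block carries identical information.
    infos = [(b["mode"], b["mv"], b["ref"]) for b in sub_blocks]
    return 1 if all(info == infos[0] for info in infos) else 0
```

In step 201 this comparison would be evaluated once over all sub-blocks of the co-located image and once per group of specific positions, each result occupying one bit of the stored marking result.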
202. The decoder acquires the auxiliary information of the co-located image for the first coded sub-block.
The first coded sub-block is a coded sub-block to be decoded in the current image block. The decoder divides the current image block into a plurality of coded sub-blocks, which include the first coded sub-block and a second coded sub-block, the second coded sub-block being one that has already been decoded.
The decoder determines the co-located image having the same coordinate information as the current image block according to the coordinate information of the current image block, and extracts the auxiliary information of the co-located image. Further, the decoder determines, according to the coordinate information of the first coded sub-block, a first co-located block in the co-located image having the same coordinate information as the first coded sub-block.
The co-located image further includes a second co-located block having the same coordinate information as the second coded sub-block. The decoder stores the coding mode, motion vector and reference picture index of the second co-located block, and also stores the decoding auxiliary value of the second coded sub-block calculated from the decoding information of the second co-located block.
For example, if the coding mode of the second co-located block is an intra coding mode, the decoder sets the motion vector of the co-located block to 0 and the reference picture index to -1, and then calculates the motion vector and reference picture index according to the calculation method specified by the video codec protocol (such as H.264, H.265 or AVS2.0) to obtain the decoding auxiliary value of the second coded sub-block. If the coding mode of the second co-located block is an inter coding mode with a forward prediction reference, so that the second co-located block has a forward motion vector and a forward reference picture index, the decoder sets the motion vector of the second co-located block to the forward motion vector and the reference picture index to the forward reference picture index, and then computes the decoding auxiliary value in the same protocol-specified way. If the coding mode of the co-located block is an inter coding mode with a backward prediction reference, so that the co-located block has a backward motion vector and a backward reference picture index, the decoder sets the motion vector to the backward motion vector and the reference picture index to the backward reference picture index, and again computes the decoding auxiliary value of the second coded sub-block according to the calculation method specified by the protocol.
It should be noted that the calculation by which the decoder obtains the decoding auxiliary value of the second coded sub-block from the decoding information of the second co-located block is complex, and consumes a large amount of the decoder's computing resources and computation time.
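The three cases above can be sketched as follows. This is a simplified illustration, not the normative H.264/H.265/AVS2.0 computation: the dictionary layout is an assumption, and `scale` is a placeholder for the protocol-defined calculation (e.g. temporal distance scaling), which is not reproduced here.

```python
def select_colocated_motion(block):
    """Choose the (motion vector, reference picture index) pair that feeds the
    protocol-defined calculation, according to the co-located block's mode."""
    if block["mode"] == "intra":
        return (0, 0), -1              # intra: MV set to 0, reference index to -1
    if block["fwd"] is not None:       # inter with forward prediction reference
        return block["fwd"]            # (forward MV, forward reference index)
    return block["bwd"]                # inter, backward prediction reference only

def decoding_auxiliary_value(block, scale):
    # 'scale' stands in for the calculation specified by the codec protocol.
    mv, ref = select_colocated_motion(block)
    return scale(mv, ref)
```

The expensive part in a real decoder is the `scale` step; steps 203-204 below exist precisely to avoid repeating it when the auxiliary information shows the inputs are identical.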
203. If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is the same, the decoder acquires the decoding auxiliary value of the second coded sub-block as the decoding auxiliary value of the first coded sub-block.
If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is the same, the decoder determines that the decoding auxiliary value of the first coded sub-block is the same as that of the second coded sub-block. The decoder then reads the decoding auxiliary value of the second coded sub-block from the storage space in which it was stored, and decodes the first coded sub-block using that value as the decoding auxiliary value of the first coded sub-block, obtaining the decoding information of the first coded sub-block.
204. If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are not the same, the decoder performs calculation according to the decoding information of the first co-located block to obtain the decoding auxiliary value of the first encoded sub-block.
If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is not the same, the decoder calculates the decoding auxiliary value of the first coded sub-block from the decoding information of the first co-located block, and then decodes the first coded sub-block using that value to obtain the decoding information of the first coded sub-block.
The method by which the decoder calculates the decoding auxiliary value of the first coded sub-block is similar to the calculation of the decoding auxiliary value of the second coded sub-block in step 202, and is not repeated here.
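The branch between steps 203 and 204 can be condensed into a single decision. All names below are illustrative placeholders; `compute_aux_value` stands for the protocol calculation of step 202.

```python
def aux_value_for_first_sub_block(colocated_info_same, stored_aux_value,
                                  first_colocated_block, compute_aux_value):
    if colocated_info_same:
        # Step 203: the auxiliary information marks the first and second
        # co-located blocks as identical, so the stored value is reused.
        return stored_aux_value
    # Step 204: the full protocol calculation runs on the first co-located block.
    return compute_aux_value(first_colocated_block)
```

The saving comes from the first branch: no call into the expensive calculation is made at all when the auxiliary information says the inputs repeat.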
205. The decoder decodes the first coded sub-block according to the decoding auxiliary value of the first coded sub-block to obtain the decoding information of the first coded sub-block.

If the decoding auxiliary value of the first coded sub-block is within a first preset range, the decoder determines that the decoding information of the first coded sub-block is the same as the decoding information of the first co-located block, and accordingly takes the decoding information of the first co-located block as the decoding information of the first coded sub-block.
If the decoding auxiliary value of the first coded sub-block is within a second preset range, the decoder acquires the decoding information of an adjacent, already-decoded coded sub-block in the current image block, and decodes the first coded sub-block according to that decoding information to obtain the decoding information of the first coded sub-block.
For example, when the decoding auxiliary value of the first coded sub-block is within the second preset range, the decoder decodes the first coded sub-block using a spatial prediction algorithm. In the spatial prediction algorithm, the decoder first determines, according to the coding mode, whether a coded sub-block adjacent to the first coded sub-block is valid. If it is valid, the decoder calculates the target decoding information according to a first calculation method specified by the video codec protocol, such as a median filtering algorithm, to obtain the decoding information of the first coded sub-block; if it is invalid, the decoder calculates the target decoding information according to a second calculation method specified by the video codec protocol.
Specifically, when the coding mode of the coded sub-block adjacent to the first coded sub-block is an inter coding mode, the decoder determines that the adjacent coded sub-block is invalid; if its coding mode is an intra coding mode, the decoder determines that the coded sub-block corresponding to the target decoding information is valid.
It should be noted that the first calculation method and the second calculation method are determined according to the relevant provisions of the particular video codec protocol, such as H.264, H.265 or AVS2.0; for their specific definitions, reference may be made to those protocols, and they are not described here.
It should be further noted that the first preset range and the second preset range are obtained according to the relevant calculation method in the video codec protocol, and are used to determine whether the decoding information of the first coded sub-block and the first co-located block is the same. When the decoding auxiliary value of the first coded sub-block is within the first preset range, the decoder directly uses the decoding information of the first co-located block as the decoding information of the first coded sub-block, completing the decoding of the first coded sub-block. When the decoding auxiliary value is within the second preset range, the decoder must perform the calculation of the first or the second calculation method to obtain the decoding information of the first coded sub-block from the decoding information of the coded sub-blocks adjacent to the first coded sub-block in the current image block.
The calculation methods for obtaining the first and second preset ranges differ between video codec protocols, so the first and second preset ranges themselves also differ between protocols. For the calculation of these ranges, reference may be made to the relevant parts of the corresponding video codec protocol; details are not repeated here.
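The dispatch in step 205 can be sketched as below. The concrete ranges and the `spatial_predict` helper are placeholders: as noted above, the real ranges are protocol-dependent.

```python
def decode_first_sub_block(aux_value, colocated_info, spatial_predict,
                           first_range, second_range):
    if aux_value in first_range:
        # First preset range: the decoding information equals the
        # first co-located block's, so it is copied directly.
        return colocated_info
    if aux_value in second_range:
        # Second preset range: recompute from adjacent decoded sub-blocks
        # (spatial prediction, e.g. median filtering per the protocol).
        return spatial_predict()
    raise ValueError("auxiliary value outside both preset ranges")
```

The first branch is the cheap path; only the second branch performs further calculation on neighbouring sub-blocks.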
206. The decoder stores the auxiliary information of the current image block.
After all the encoded sub-blocks in the current image block are decoded, the decoder acquires and stores the auxiliary information of the current image block to assist the decoder in decoding the subsequent image.
Step 206 is similar to step 201 described above, and is not described in detail here.
In this embodiment, when the coding mode, motion vector and reference picture index of the first co-located block and the second co-located block are the same, the decoding auxiliary value of the second coded sub-block has already been calculated from them according to the video codec protocol during decoding of the second coded sub-block, and stored. When decoding the first coded sub-block, the decoder therefore does not need to calculate the decoding auxiliary value of the first coded sub-block again; it can directly copy or read the previously stored decoding auxiliary value of the second coded sub-block, simplifying the decoding operation. More importantly, because the calculation of the decoding auxiliary value is very complicated, time-consuming, and accounts for a large proportion of the total decoding time, the image decoding method in the embodiments of the present application can greatly shorten the decoding time, thereby saving the computing resources of the decoder, improving its decoding efficiency and, ultimately, its decoding performance.
Furthermore, because the image decoding method in the present application substantially optimizes the decoding operation and improves the decoding performance of the decoder, parallel processing across multiple computing units is not required; the decoding calculation can be performed by a single computing unit.
To facilitate understanding of steps 201 and 206 above, the following describes the specific process of storing the auxiliary information of the co-located image in a concrete application scenario:
Application scenario one: for simplicity, the full-frame coding and full spatial motion prediction modes under the H.264 protocol are taken as an example. As shown in fig. 3, under the above protocol and coding modes a 16×16 MB is divided into 16 4×4 coded sub-blocks (b0-b15); the figure shows only one arrangement, and other arrangements are possible, which is not limited in this application. According to the relevant provisions of the H.264 protocol, the decoding information of the current 16×16 MB coding block is stored per 4×4 coded sub-block (b0-b15): the coding mode, motion vector and reference picture index of each of b0-b15 are stored. After the MB is decoded, it is determined according to the H.264 protocol whether the coding modes, motion vectors and reference picture indexes of all coded sub-blocks in b0-b15, and of the coded sub-blocks at specific positions, are the same; flag bits are assigned according to the determination results and stored. For the arrangement shown in fig. 3, the auxiliary information can be expressed in a 16-bit flag with the following assignment:
bit 0: 1 if the coding mode, motion vector and reference picture index of all sixteen 4×4 sub-blocks within the MB are consistent; otherwise 0;
bit 1: 1 if b0, b5, b10 and b15 are all intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0;
bit 2: 1 if b0, b5, b10 and b15 are all inter coding modes without forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0;
bit 3: 1 if the coding modes of b0, b5, b10 and b15 are consistent, together with their forward and backward motion vectors and reference picture indexes; otherwise 0;
bits 4-6: the same three conditions as bits 1-3, evaluated for b0, b1, b2 and b3;
bits 7-9: the same three conditions, evaluated for b4, b5, b6 and b7;
bits 10-12: the same three conditions, evaluated for b8, b9, b10 and b11;
bits 13-15: the same three conditions, evaluated for b12, b13, b14 and b15.
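As a hedged illustration of the assignment above (the data layout, field names and helper functions are assumptions for illustration, not part of the H.264 protocol), the 16-bit auxiliary flag could be packed as follows:

```python
def _consistent(values):
    return all(v == values[0] for v in values)

def _group_bits(blocks):
    """The three per-group conditions used for bits 1-3, 4-6, 7-9, 10-12, 13-15."""
    all_intra = all(b["mode"] == "intra" for b in blocks)
    fwd_ok = (all(b["fwd"] is not None for b in blocks)
              and _consistent([b["fwd"] for b in blocks]))
    no_fwd_bwd_ok = (all(b["mode"] == "inter" and b["fwd"] is None for b in blocks)
                     and _consistent([b["bwd"] for b in blocks]))
    all_ok = (_consistent([b["mode"] for b in blocks])
              and _consistent([(b["fwd"], b["bwd"]) for b in blocks]))
    return (int(all_intra or fwd_ok), int(no_fwd_bwd_ok), int(all_ok))

def auxiliary_flags(b):
    """Pack the 16-bit auxiliary flag for sub-blocks b[0]..b[15]."""
    # bit 0: all 16 sub-blocks carry fully consistent decoding information
    flags = int(_consistent([(x["mode"], x["fwd"], x["bwd"]) for x in b]))
    groups = [(1, (0, 5, 10, 15)),   # diagonal group, bits 1-3
              (4, (0, 1, 2, 3)),     # row groups, bits 4-6 / 7-9 / 10-12 / 13-15
              (7, (4, 5, 6, 7)),
              (10, (8, 9, 10, 11)),
              (13, (12, 13, 14, 15))]
    for base, idx in groups:
        for offset, bit in enumerate(_group_bits([b[i] for i in idx])):
            flags |= bit << (base + offset)
    return flags
```

Each sub-block is represented here as a dict with `mode` (`"intra"`/`"inter"`), `fwd` and `bwd` (a (motion vector, reference index) pair, or `None` when that prediction direction is absent); a real decoder would read these from its stored decoding information.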
Application scenario two: taking the coding block in the application scenario one as an example, the image decoding method in the embodiment of the present application is described in detail as follows:
s100: the 16x16MB is divided into 16 4x4 coding blocks b0-b15 according to the H.264 protocol, and the following calculations from S101 to S110 are sequentially performed according to the decoding order of b0, b1, b2, b3 … … …, b14 and b 15.
If the coded sub-block currently being decoded is b0, jump to step S102; if it is any one of b1 to b15, perform the following step S101.
S101: find the co-located image corresponding to the coded sub-block currently being decoded.
The co-located image has already been decoded by the time the current image is decoded, and the decoder has stored the coding mode, motion vector and reference picture index of each coded sub-block in the co-located image, together with the auxiliary information of the co-located image.
S102: if the coded sub-block currently being decoded is b0, jump to step S104; otherwise, extract the auxiliary information G and determine, according to G, whether the co-located block of the coded sub-block currently being decoded has the same coding mode, motion vector and reference picture index as the co-located block of a coded sub-block already decoded in the current MB coding block.
S103: if they are the same, copy the calculation result previously obtained for that co-located block according to the H.264 protocol, denote it R, and jump to step S106; if not, jump to step S104.
S104: find the coordinates of the corresponding co-located block in the co-located image according to the H.264 protocol.
S105: according to the coordinates obtained in step S104, address the coding mode, motion vector and reference picture index of the co-located block within the information stored for the co-located image, calculate according to the H.264 protocol, denote the result R, and store R; for the specific calculation, refer to step S103, which is not repeated here.
S106: if the coded sub-block currently being decoded is b0, proceed to step S107; if it is any one of b1 to b15, jump to step S108.
S107: determine the coding mode, motion vector and reference picture index of a coding block adjacent to the current MB coding block according to the H.264 protocol.
S108: obtain the motion vector and reference picture index of the current coded sub-block by applying the relevant calculation specified by the H.264 protocol to the calculation result R and to the coding mode, motion vector and reference picture index of the adjacent coding block obtained in step S107.

S109: determine according to the H.264 protocol whether the coding mode, motion vector and reference picture index of the current coded sub-block need to be stored. If so, store them for use as a reference for subsequent images to be decoded; if not, do not store them and jump directly back to steps S101 to S108 to decode the next coded sub-block in the current image.
S110: if b0-b15 have all been decoded, and the 16×16 MB needs to be stored to optimize the decoding of subsequent images, determine whether the coding modes, motion vectors and reference picture indexes of the 16 coded sub-blocks b0-b15 are all consistent, and likewise, according to the H.264 protocol, whether those of the coded sub-blocks at certain specific positions are consistent; mark the results to generate the corresponding auxiliary information. For the specific determination process, refer to application scenario one; it is not repeated here. If decoding is not yet complete, jump to step S101 and continue decoding.
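The control flow of S100-S110 can be condensed into the loop below. This is only a structural sketch: `colocated_repeats` stands for the lookup in auxiliary information G, and `compute_R` and `combine` are placeholders for the H.264-defined computations of S104-S105 and S107-S108.

```python
def decode_mb(colocated_repeats, compute_R, combine, n=16):
    """colocated_repeats(i) is True when auxiliary information G says sub-block
    i's co-located information matches that of an already-decoded sub-block."""
    results, R = [], None
    for i in range(n):
        if i == 0 or not colocated_repeats(i):
            R = compute_R(i)             # S104-S105: address co-located block, compute R
        # otherwise S103: reuse the previously stored result R
        results.append(combine(i, R))    # S107-S108: combine with neighbour information
    return results
```

In the best case (all co-located information identical), the expensive `compute_R` runs once for the whole macroblock instead of sixteen times, which is the saving the method targets.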
The following describes the decoder in the embodiment of the present application in detail with reference to the specific implementation manner, specifically as follows:
As shown in fig. 4, in an embodiment of the present application the decoder 40 includes: a first obtaining module 401, a second obtaining module 402 and a decoding module 403;
the first obtaining module 401 is configured to perform the operation described in step 202 above;
the second obtaining module 402 may perform the operation described in step 203 above;
the decoding module 403 may perform the operation described in step 205 above.
In one example, as shown in fig. 5, in addition to the modules shown in fig. 4 the decoder 50 further includes: a judging module 504, a storage module 505 and a calculating module 506. The judging module 504 and the storage module 505 may be configured to perform the operations described in steps 201 and 206 above, and the related operations of acquiring and storing the auxiliary information in application scenario one; the calculating module 506 is configured to perform the operation described in step 204.
For the functions of the modules, reference may be made to the description in the embodiment, the first application scenario, and the second application scenario corresponding to fig. 2 for understanding, and details of the description are not repeated here.
The decoder may also be a decoding chip. In an implementation manner, the decoder may be implemented by hardware, or may be implemented by hardware to execute corresponding software, where the hardware and the software include modules corresponding to one or more of the above functions.
The following describes the hardware structure of the decoder in the embodiment of the present application in detail, specifically as follows:
as shown in fig. 6, which is a hardware structure of the decoder in the embodiment of the present application, the decoder 60 includes:
a processor 601 and a memory 602. The memory 602 may include read-only memory and random-access memory, and provides instructions and data to the processor 601. A portion of the memory 602 may also include non-volatile random access memory (NVRAM). The memory 602 stores the following elements: executable modules or data structures, or a subset or an expanded set thereof:
and (3) operating instructions: the method comprises various operation instructions for realizing various operations; operating the system: including various system programs for implementing various basic services and for handling hardware-based tasks.
Processor 601 may also be referred to as a Central Processing Unit (CPU). The image decoding method disclosed in the embodiment of the present application can be applied to the processor 601 or implemented by the processor 601. The processor 601 may be an integrated circuit chip having decoding capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601.
The processor 601 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the image decoding method disclosed in the embodiments of the present application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, programmable read-only memory, electrically erasable programmable read-only memory, or a register.
The processor 601 executes the decoding operations of the decoder described in the method embodiment corresponding to Fig. 2 by calling the operation instructions stored in the memory 602.
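The processor/memory arrangement described above can be sketched as follows. This is a minimal illustrative model only; the class and function names (`Memory`, `Processor`, `decode_macroblock`) are hypothetical and are not part of the application. It shows the memory 602 holding operation instructions that the processor 601 calls, with a trivial stand-in "decode" instruction that partitions a 16 × 16 macroblock into 16 coded sub-blocks of 4 × 4, as in Fig. 1.

```python
class Memory:
    """Models the memory 602: stores named operation instructions (callables)."""

    def __init__(self):
        self.operation_instructions = {}

    def store(self, name, instruction):
        self.operation_instructions[name] = instruction


class Processor:
    """Models the processor 601: executes instructions fetched from memory."""

    def __init__(self, memory):
        self.memory = memory

    def call(self, name, *args):
        # Fetch the operation instruction from memory and execute it.
        return self.memory.operation_instructions[name](*args)


def decode_macroblock(mb_samples):
    """Stand-in decode step: split a 16x16 macroblock (256 samples, flattened)
    into sixteen 4x4 sub-blocks of 16 samples each, returning the count."""
    sub_blocks = [mb_samples[i:i + 16] for i in range(0, len(mb_samples), 16)]
    return len(sub_blocks)


mem = Memory()
mem.store("decode", decode_macroblock)
cpu = Processor(mem)
count = cpu.call("decode", list(range(256)))  # 16 sub-blocks of 4x4
```

In a real decoder, the stored instructions would implement the entropy decoding, inverse transform, and reconstruction steps of the applicable video coding protocol rather than a simple partition.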
An embodiment of the present application provides a computer-readable storage medium storing computer operation instructions for the decoder; when the computer operation instructions are run on a computer, the computer is caused to execute the image decoding method in the embodiment corresponding to Fig. 2.
The present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the image decoding method in the embodiment corresponding to Fig. 2.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only one logical division, and other divisions may be adopted in practice; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.