
CN112135139B - Method, computer system and storage medium for partitioning encoded video data - Google Patents


Info

Publication number
CN112135139B
CN112135139B (application number CN202010580337.XA)
Authority
CN
China
Prior art keywords
slice
computer
video frame
frame data
sub
Prior art date
Legal status: Active (the status is an assumption, not a legal conclusion)
Application number
CN202010580337.XA
Other languages
Chinese (zh)
Other versions
CN112135139A
Inventor
崔秉斗
刘杉
文格尔史蒂芬
Current Assignee
Tencent America LLC
Original Assignee
Tencent America LLC
Priority date
Filing date
Publication date
Priority claimed from US 16/908,036 (now US11589043B2)
Application filed by Tencent America LLC
Priority to CN202310268603.9A (CN116260971B)
Publication of CN112135139A
Application granted
Publication of CN112135139B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 ... using adaptive coding
    • H04N19/169 ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 ... the unit being an image region, e.g. an object
    • H04N19/172 ... the region being a picture, frame or field
    • H04N19/174 ... the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176 ... the region being a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the application provides a method, a computer system and a storage medium for partitioning encoded video data. The method of partitioning encoded video data comprises: video frame data is received and partitioned into at least one sub-unit. The sub-units may each have a unique address value and the at least one sub-unit is arranged in increasing order according to the unique address value. The left boundary and the top boundary associated with each of the sub-units may include at least one of a picture boundary and a boundary of a previously decoded sub-unit.

Description

Method, computer system and storage medium for partitioning encoded video data
Cross-referencing
This application claims priority to U.S. Provisional Application No. 62/865,945, filed with the United States Patent and Trademark Office on June 24, 2019, and U.S. Patent Application No. 16/908,036, filed with the United States Patent and Trademark Office on June 22, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates generally to the field of data processing, and more particularly to video encoding and decoding.
Background
A picture may be divided into at least one tile (Tile). A tile is a sequence of Coding Tree Units (CTUs) corresponding to a rectangular sub-region of a picture. A tile may be divided into at least one brick (Brick). A slice (Slice) contains multiple tiles of a picture or multiple bricks of a tile. Two slice modes are supported: raster-scan slice mode and rectangular slice mode. In raster-scan slice mode, a slice includes a sequence of tiles in the tile raster scan of a picture. In rectangular slice mode, a slice contains a number of bricks of a picture which collectively form a rectangular region.
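The tile/brick/slice hierarchy described above can be sketched as a small data model. This is only an illustrative sketch; the class and function names (`Brick`, `Tile`, `tile_is_single_brick`) are hypothetical and not part of any codec API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Brick:
    # A brick is a rectangular run of CTU rows inside a tile.
    ctu_rows: int

@dataclass
class Tile:
    # A tile is a sequence of CTUs covering a rectangular sub-region
    # of the picture; it may be split into one or more bricks.
    bricks: List[Brick] = field(default_factory=list)

@dataclass
class Slice:
    # A slice holds complete tiles (raster-scan mode) or complete
    # bricks forming a rectangle (rectangular mode).
    tile_or_brick_indices: List[int] = field(default_factory=list)

def tile_is_single_brick(tile: Tile) -> bool:
    # With no further brick splitting, the whole tile counts as one brick.
    return len(tile.bricks) <= 1
```

A tile constructed with two bricks is treated as multi-brick; an unsplit tile is treated as a single brick.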
Disclosure of Invention
When partitioning encoded video data, a key question in the field is how to partition a video frame so that the partitioned picture can be decoded quickly and efficiently while saving bits.
Embodiments of the present application relate to a method, computer system, and storage medium for partitioning encoded video data.
An embodiment of the present application provides a method for partitioning encoded video data. The method includes receiving video frame data and partitioning the video frame data into at least one sub-unit, where each sub-unit has a unique address value and the at least one sub-unit is arranged in increasing order of the unique address values, and the left and top boundaries associated with each sub-unit comprise at least one of a picture boundary and a boundary of a previously decoded sub-unit.
An embodiment of the present application also provides a computer system for partitioning encoded video data. The computer system comprises a receiving module for receiving video frame data and a partitioning module for partitioning the video frame data into at least one sub-unit, where each sub-unit has a unique address value according to which the at least one sub-unit is arranged in increasing order, and the left and top boundaries associated with each sub-unit comprise at least one of a picture boundary and a boundary of a previously decoded sub-unit.
An embodiment of the present application further provides a non-transitory computer-readable medium storing instructions, the instructions including at least one instruction which, when executed by at least one processor of a computer, causes the at least one processor to perform the method described in the embodiments of the present application.
An embodiment of the present application further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method described in the embodiments of the present application.
According to the technical solutions of the embodiments of the present application, when partitioning encoded video data, the video frame data is divided into at least one sub-unit arranged in increasing order of unique address values. A decoder can therefore decode the sub-units consecutively by increasing address or identifier value, which improves decoding efficiency and saves bits.
Drawings
These and other objects, features and advantages will become apparent from the following detailed description of illustrative embodiments, which is to be read in connection with the accompanying drawings. The various features of the drawings are not to scale, as the illustrations are intended to aid understanding by those skilled in the art in conjunction with the detailed description. In the drawings:
FIG. 1 illustrates a networked computer environment, according to at least one embodiment;
FIG. 2 is an exemplary partition image in accordance with at least one embodiment;
FIGS. 3A-3C are exemplary partition parameters according to at least one embodiment;
FIG. 4 is an operational flow diagram illustrating steps performed by a program partitioning encoded video in accordance with at least one embodiment;
FIG. 5 is a block diagram of internal and external components of the computer and server depicted in FIG. 1, in accordance with at least one embodiment;
FIG. 6 is a block diagram of a cloud computing environment including the computer system depicted in FIG. 1, in accordance with at least one embodiment; and
FIG. 7 is a block diagram of functional layers of the cloud computing environment of FIG. 6 in accordance with at least one embodiment.
Detailed Description
Specific embodiments of the claimed structures and methods are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative of structures and methods that may be embodied in various forms. These structures and methods should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the claims to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments.
Embodiments relate generally to the field of data processing, and more particularly to video encoding and decoding. The exemplary embodiments described below provide a system, method and computer program for partitioning encoded video data. Some embodiments therefore have the capacity to improve the field of computing by encoding and decoding partitioned video frames and image data while treating a single-tile slice or video frame as an independent, separate picture.
As previously described, a picture may be divided into at least one tile. A tile is a sequence of Coding Tree Units (CTUs) corresponding to a rectangular sub-region of a picture. A tile may be divided into at least one brick. A slice includes multiple tiles of a picture or multiple bricks of a tile. Two slice modes are supported: raster-scan slice mode and rectangular slice mode. In raster-scan slice mode, a slice includes a sequence of tiles in the tile raster scan of a picture. In rectangular slice mode, a slice contains a number of bricks of a picture which collectively form a rectangular region.
However, in the latest VVC WD (JVET-N1001-v8), the syntax element single_tile_in_pic_flag signaled in the Picture Parameter Set (PPS) indicates whether there is only one tile in each picture or more than one tile in each picture. If the value of single_tile_in_pic_flag is equal to 1, a tile cannot be partitioned into bricks, because the syntax elements brick_splitting_present_flag and brick_split_flag[i] are not present. When brick_split_flag[i] is absent, the value of each brick_split_flag[i] is inferred to be equal to 0, and no tile of a picture referring to the PPS is divided into two or more bricks. Thus, in order to allow a tile in a picture to have multiple bricks, it may be useful to add, among other things, an additional syntax element (e.g., single_brick_in_pic_flag) that indicates whether there may be only one brick in each picture or more than one brick in each picture. For example, a tile having a plurality of bricks in a picture allows that tile to be treated as a sub-picture extracted from a picture having a plurality of tiles, where the tile has a plurality of bricks. This may save bits over conventional approaches.
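The inference behaviour described above can be sketched in Python. This is a rough illustration only: the function names are invented, the dictionary stands in for parsed PPS syntax, and the cleaned-up element names (brick_split_flag, single_brick_in_pic_flag) are assumed from context:

```python
from typing import Dict, Optional

def infer_brick_split_flags(brick_split_flags: Optional[Dict[int, int]],
                            num_tiles: int) -> Dict[int, int]:
    # When the brick-splitting syntax is absent from the PPS, every
    # brick_split_flag[i] is inferred to be 0: no tile of a picture
    # referring to the PPS may be divided into two or more bricks.
    if brick_split_flags is None:
        return {i: 0 for i in range(num_tiles)}
    return brick_split_flags

def multiple_bricks_possible(single_brick_in_pic_flag: int) -> bool:
    # The additional flag proposed here: a value of 0 means each picture
    # may contain more than one brick, even when it has a single tile.
    return single_brick_in_pic_flag == 0
```

With the syntax absent, every tile's split flag is inferred to be 0, which is exactly the limitation the additional flag is meant to lift.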
Aspects are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer-readable media in accordance with various embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
Referring to fig. 1, fig. 1 is a functional block diagram of a networked computer environment, showing a video frame partitioning system 100 (hereinafter "system") for partitioning encoded video data. It should be understood that FIG. 1 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements.
The system 100 may include a computer 102 and a server computer 114. The computer 102 may communicate with a server computer 114 over a communication network 110 (hereinafter "network"). The computer 102 may include a processor 104 and a software program 108, the software program 108 being stored on a data storage device 106 and capable of interfacing with a user and communicating with a server computer 114. Computer 102, as will be discussed below with reference to fig. 5, may include internal components 800A and external components 900A, respectively, and server computer 114 may include internal components 800B and external components 900B, respectively. The computer 102 may be, for example, a mobile device, a telephone, a personal digital assistant, a netbook, a laptop computer, a tablet computer, a desktop computer, or any type of computing device capable of running programs, accessing a network, and accessing a database.
The server computer 114 may also operate in a cloud computing service model, such as Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS), as discussed below with respect to FIGS. 6 and 7. The server computer 114 may also be located in a cloud computing deployment model, such as a private cloud, community cloud, public cloud, or hybrid cloud.
The server computer 114, which may be used for partitioning encoded video data, is enabled to run a Video Partitioning Program 116 (hereinafter "Program") that may interact with the database 112. The video partitioning method will be explained in more detail below in conjunction with fig. 4. In one embodiment, the computer 102 may operate as an input device including a user interface, while the program 116 may run primarily on the server computer 114. In an alternative embodiment, the program 116 may run primarily on at least one computer 102, while the server computer 114 may be used to process and store data used by the program 116. It should be noted that the program 116 may be a standalone program or may be integrated into a larger video partitioning program.
It should be noted, however, that in some cases the processing of the program 116 may be shared between the computer 102 and the server computer 114 in any ratio. In another embodiment, the program 116 may operate on more than one computer, more than one server computer, or some combination of computers and server computers (e.g., multiple computers 102 communicating with a single server computer 114 across the network 110). For example, in another embodiment, the program 116 may operate on multiple server computers 114 communicating with multiple client computers across the network 110. In some embodiments, the program may operate on a network server communicating with a server and a plurality of client computers across a network.
Network 110 may include wired connections, wireless connections, fiber optic connections, or some combination thereof. In general, the network 110 may be any combination of connections and protocols that support communication between the computer 102 and the server computer 114. Network 110 may include various types of networks, such as a Local Area Network (LAN), a Wide Area Network (WAN) such as the Internet, a telecommunications Network such as the Public Switched Telephone Network (PSTN), a wireless Network, a Public Switched Network, a satellite Network, a cellular Network (e.g., a fifth generation (5G) Network, a Long-Term Evolution (LTE) Network, a third generation (3G) Network, a Code Division Multiple Access (CDMA) Network, etc.), a Public Land Mobile Network (PLMN), a Metropolitan Area Network (MAN), a private Network, an ad hoc Network, an intranet, a fiber-based Network, etc., and/or a combination of these or other types of networks.
The number and arrangement of devices and networks shown in fig. 1 are provided as examples. In practice, there may be more devices and/or networks, fewer devices and/or networks, different devices and/or networks, or a different arrangement of devices and/or networks than those shown in FIG. 1. Further, two or more of the devices shown in fig. 1 may be implemented within a single device, or a single device shown in fig. 1 may be implemented as multiple distributed devices. Additionally or alternatively, a set of devices (e.g., at least one device) of environment 100 may perform at least one function described as being performed by another set of devices of environment 100.
Referring now to FIG. 2, an exemplary partitioned image 200 is depicted. The partitioned image 200 may be divided into at least one tile 204. A tile 204 may be further divided into at least one brick 206. Tiles 204 and bricks 206 may be grouped together into at least one slice 202. It will be appreciated that, without further brick partitioning within a tile, the entire tile may be referred to as a brick. When a picture contains only a single tile without further brick partitioning, the single tile may be referred to as a single brick. The partitioned image 200 and each of the slices 202, tiles 204, and bricks 206 may have a top boundary, a left boundary, a bottom boundary, and a right boundary.
Referring to fig. 3A, 3B, and 3C, exemplary partition parameters 300A, 300B, and 300C are depicted in accordance with at least one embodiment. The partition parameters 300A, 300B, and 300C may include, among other things, syntax elements that may accordingly include:
single_brick_in_pic_flag equal to 1 may indicate that there is only one brick in each picture referring to the Picture Parameter Set (PPS), and single_brick_in_pic_flag equal to 0 may indicate that there may be multiple bricks in each picture referring to the PPS. A requirement of bitstream conformance is that the value of single_brick_in_pic_flag be the same for all PPSs activated within a Coded Video Sequence (CVS).
single_tile_in_pic_flag equal to 1 may indicate that there is only one tile in each picture referring to the PPS, while single_tile_in_pic_flag equal to 0 may indicate that there may be more than one tile in each picture referring to the PPS. When single_tile_in_pic_flag is not present, its value may be inferred to be equal to 1. A requirement of bitstream conformance is that the value of single_tile_in_pic_flag be the same for all PPSs activated within the CVS.
slice_address may indicate the slice address of the slice. When the slice_address syntax element is not present, the value of slice_address may be inferred to be equal to 0. The slice address may be a brick ID. The length of slice_address is Ceil(Log2(NumBricksInPic)) bits. The value of slice_address may range from 0 to NumBricksInPic - 1, inclusive.
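The bit-length rule and the inference rule above can be worked through with two hypothetical helpers (the function names are illustrative, not part of any decoder API):

```python
import math

def slice_address_bit_length(num_bricks_in_pic: int) -> int:
    # slice_address is Ceil(Log2(NumBricksInPic)) bits long.
    return math.ceil(math.log2(num_bricks_in_pic))

def read_slice_address(present: bool, coded_value: int = 0) -> int:
    # When the syntax element is absent, its value is inferred to be 0.
    return coded_value if present else 0
```

For example, a picture with 8 bricks needs 3 bits for slice_address, and a picture with 5 bricks also needs 3 bits, since the length is rounded up.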
sh_slice_id may indicate the slice ID of a slice. When sh_slice_id is not present, its value may be inferred to be equal to 0. The length of sh_slice_id may be signalled_slice_id_length_minus1 + 1 bits. If signalled_slice_id_flag is equal to 0, the value of sh_slice_id may range from 0 to num_slices_in_pic_minus1, inclusive. Otherwise, the value of sh_slice_id may be in the range of 0 to 2^(signalled_slice_id_length_minus1 + 1) - 1, inclusive.
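The two valid ranges of the slice ID can be computed as follows. This is a sketch assuming the cleaned-up element names signalled_slice_id_flag and signalled_slice_id_length_minus1; the helper name is invented:

```python
def sh_slice_id_range(signalled_slice_id_flag: int,
                      signalled_slice_id_length_minus1: int,
                      num_slices_in_pic_minus1: int):
    # Returns the inclusive (min, max) range of sh_slice_id.
    if signalled_slice_id_flag == 0:
        # IDs are implicit: bounded by the slice count in the picture.
        return (0, num_slices_in_pic_minus1)
    # IDs are explicitly signalled: bounded by the coded bit length.
    return (0, 2 ** (signalled_slice_id_length_minus1 + 1) - 1)
```

With a 6-bit signalled ID (length_minus1 = 5), the valid range is 0 to 63 regardless of the slice count.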
When uniform_tile_spacing_flag is equal to 0, brick_row_height_minus1[i][j] plus 1 may indicate the height of the j-th brick in the i-th tile in units of CTBs. When brick_row_height_minus1 is not present, the value of brick_row_height_minus1[i][j] may be inferred to be equal to RowHeight[i] - 1.
bottom_right_brick_idx_delta[i] may indicate the difference between the brick index of the brick located at the bottom-right corner of the i-th slice and top_left_brick_idx[i]. When single_brick_per_slice_flag is equal to 1, the value of bottom_right_brick_idx_delta[i] may be inferred to be equal to 0. The length of the bottom_right_brick_idx_delta[i] syntax element may be Ceil(Log2(NumBricksInPic - top_left_brick_idx[i])) bits. When bottom_right_brick_idx_delta is not present, the value of bottom_right_brick_idx_delta[i] may be inferred to be equal to NumBricksInPic - top_left_brick_idx[i] - 1.
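The inference and bit-length rules for this delta element can be sketched as below. Function names are hypothetical; the arguments mirror the syntax elements named above:

```python
import math

def bottom_right_brick_idx_delta_bits(num_bricks_in_pic: int,
                                      top_left_brick_idx: int) -> int:
    # Ceil(Log2(NumBricksInPic - top_left_brick_idx[i])) bits.
    return math.ceil(math.log2(num_bricks_in_pic - top_left_brick_idx))

def infer_bottom_right_brick_idx_delta(present: bool, coded_value: int,
                                       single_brick_per_slice_flag: int,
                                       num_bricks_in_pic: int,
                                       top_left_brick_idx: int) -> int:
    if single_brick_per_slice_flag == 1:
        # One brick per slice: the delta is inferred to be 0.
        return 0
    if not present:
        # Absent element: inferred as NumBricksInPic - top_left_brick_idx[i] - 1.
        return num_bricks_in_pic - top_left_brick_idx - 1
    return coded_value
```

For a 10-brick picture with top_left_brick_idx[i] = 2, the element is 3 bits long, and an absent element is inferred as 7.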
A requirement of bitstream conformance is that at least one of the following constraints applies. For example, the value of slice_address is not equal to the value of slice_address of any other coded slice NAL unit of the same coded picture. The slices of a picture may be arranged in increasing order of their slice_address values. The shapes of the slices of a picture may be such that each brick, when decoded, has its entire left boundary and entire top boundary consisting of a picture boundary or of boundaries of previously decoded brick(s).
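The uniqueness and ordering constraints on slice_address can be checked together, since strictly increasing addresses are both distinct and in increasing order. A minimal sketch (the function name is invented):

```python
def slice_order_conforms(slice_addresses) -> bool:
    # slice_address must be unique among the coded slice NAL units of a
    # picture, and the slices must appear in increasing address order;
    # a strictly increasing sequence satisfies both constraints at once.
    return all(a < b for a, b in zip(slice_addresses, slice_addresses[1:]))
```

A repeated or out-of-order address makes the sequence non-conforming.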
Referring to fig. 4, an operational flow diagram 400 of steps performed by a program that partitions encoded video data is depicted. Fig. 4 can be described with the aid of fig. 1, 2 and 3A to 3C. As previously described, the video partitioning program 116 (fig. 1) can partition encoded video data quickly and efficiently.
At 402, video frame data is received. The video frame data may be a still image or video data from which at least one frame may be extracted. In operation, the video partitioning program 116 (FIG. 1) on the server computer 114 (FIG. 1) may receive the partition image 200 (FIG. 2) from the computer 102 (FIG. 1) over the communication network 110 (FIG. 1), or may retrieve the partition image 200 from the database 112 (FIG. 1).
At 404, the video frame data is partitioned into at least one sub-unit.
In some embodiments, each of the sub-units has a unique address value, and the at least one sub-unit is arranged in increasing order according to the unique address values. The left boundary and the top boundary associated with each of the sub-units comprise at least one of a picture boundary or a boundary of a previously decoded sub-unit. In some embodiments, the sub-units comprise one or more of slices, tiles, and bricks, and the video frame data may be partitioned into slices, tiles, and bricks. In operation, the video partitioning program 116 (fig. 1) on the server computer 114 (fig. 1) may partition the image 200 (fig. 2) into a number of slices 202 (fig. 2), tiles 204 (fig. 2), and bricks 206 (fig. 2) according to features in the partitioned image 200 and inter- and intra-prediction.
In some embodiments, the mode associated with a slice comprises one of a raster-scan slice mode or a rectangular slice mode; the slices may be decoded in an order corresponding to the raster-scan slice mode, or decoded in an order corresponding to the rectangular slice mode.
In some embodiments, the raster-scan slice mode is enabled within the rectangular slice mode.
At 406, a flag is set indicating a number of sub-units of the video frame data.
In some embodiments, the flag may be, for example, single_tile_in_pic_flag or single_brick_in_pic_flag.
In some embodiments, if it is determined, from the number of sub-units indicated by the set flag, that a slice contains one tile and more than one brick, the slice is decoded as an independent picture. By means of the flag indicating the number of sub-units present, it may be determined, for example, that a slice contains one tile and more than one brick, and the slice may then be processed, e.g., decoded, as an independent picture. In operation, the video partitioning program 116 (FIG. 1) on the server computer 114 (FIG. 1) may set the flags using the partition parameters 300A, 300B, and/or 300C (FIGS. 3A-3C) depending on the number of slices 202 (FIG. 2), tiles 204 (FIG. 2), and bricks 206 (FIG. 2) within the partitioned image 200 (FIG. 2). When it is determined that a slice 202 comprises one tile 204 and more than one brick 206, the slice 202 may be processed as a standalone image.
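The decision described above, that a slice holding exactly one tile split into more than one brick may be treated as an independently decodable picture, reduces to a simple predicate. This is an illustrative sketch with an invented name:

```python
def treat_slice_as_independent_picture(num_tiles_in_slice: int,
                                       num_bricks_in_slice: int) -> bool:
    # A slice found (via the set flags) to contain exactly one tile that
    # is split into more than one brick can be decoded as an
    # independent picture, like a sub-picture extracted from the frame.
    return num_tiles_in_slice == 1 and num_bricks_in_slice > 1
```

A slice with several tiles, or with a single unsplit tile, does not qualify.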
At 408, the encoded video data is decoded according to the address values associated with the sub-units. For example, the sub-units may be decoded consecutively according to increasing address or identifier values. In operation, the video partitioning program 116 (fig. 1) on the server computer 114 (fig. 1) may decode the slices 202 (fig. 2), tiles 204 (fig. 2), and bricks 206 (fig. 2) in successively increasing order to decode the partitioned image 200 (fig. 2).
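The decode-in-increasing-address-order step can be sketched as follows. The function name and the dictionary stand-in for addressed sub-units are hypothetical:

```python
def decode_by_increasing_address(subunits: dict) -> list:
    # Decode sub-units consecutively in increasing address/ID order;
    # relying on this order is what lets the bitstream omit explicit
    # ordering information and thereby save bits.
    decoded = []
    for address in sorted(subunits):
        decoded.append(subunits[address])  # placeholder for real decoding
    return decoded
```

Sub-units received keyed by address 2, 0, 1 are decoded in the order 0, 1, 2.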
It will be appreciated that fig. 4 provides only an illustration of one implementation and does not imply any limitation on how the different embodiments may be implemented. Many modifications to the depicted environments may be made depending on design and implementation requirements.
According to the technical solutions of the embodiments of the present application, when partitioning encoded video data, the video frame data is divided into at least one sub-unit arranged in increasing order of unique address values. A decoder can therefore decode the sub-units consecutively by increasing address or identifier value, which improves decoding efficiency and saves bits.
An embodiment of the present application further provides a computer system for partitioning encoded video data, comprising a receiving module and a partitioning module;
the receiving module is used for receiving video frame data;
a partitioning module for partitioning the video frame data into at least one sub-unit;
wherein each of the sub-units has a unique address value according to which the at least one sub-unit is arranged in increasing order, and the left and top boundaries associated with each of the sub-units comprise at least one of a picture boundary and a boundary of a previously decoded sub-unit.
In some embodiments, the sub-units comprise one or more of slices, tiles, and bricks.
In some embodiments, the mode associated with a slice comprises one of a raster-scan slice mode or a rectangular slice mode; the computer system further comprises a decoding module for decoding the slices in an order corresponding to the raster-scan slice mode, or decoding the slices in an order corresponding to the rectangular slice mode.
In some embodiments, the decoding module is further configured to enable the raster-scan slice mode within the rectangular slice mode.
In some embodiments, the computer system further comprises a setting module for setting a flag after dividing the video frame data into at least one sub-unit, the flag indicating a number of sub-units present in the video frame data.
In some embodiments, if it is determined, according to the number of sub-units indicated by the set flag, that a slice contains one tile and more than one brick, the slice is decoded as an independent picture.
In some embodiments, the computer system further comprises a decoding module to decode the encoded video data according to the address value associated with the sub-unit.
FIG. 5 is a block diagram 500 of internal and external components of the computer depicted in FIG. 1, in accordance with an illustrative embodiment. It should be understood that FIG. 5 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made depending on design and implementation requirements.
Computer 102 (FIG. 1) and server computer 114 (FIG. 1) may include respective sets of internal components 800A, 800B and external components 900A, 900B as shown in FIG. 5. Each set of internal components 800 includes at least one processor 820, at least one computer-readable RAM 822, and at least one computer-readable ROM 824 on at least one bus 826, at least one operating system 828, and at least one computer-readable tangible storage device 830.
The processor 820 is implemented in hardware, firmware, or a combination of hardware and software. Processor 820 is a central processing unit (CPU), graphics processing unit (GPU), accelerated processing unit (APU), microprocessor, microcontroller, digital signal processor (DSP), field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, the processor 820 includes at least one processor that can be programmed to perform functions. Bus 826 includes components that permit communication between the internal components 800A, 800B.
At least one operating system 828, software programs 108 (fig. 1), and video partition programs 116 (fig. 1) on server computer 114 (fig. 1) are stored on at least one respective computer readable tangible storage device 830 for execution by at least one respective processor 820 via respective at least one RAM 822 (which typically includes cache memory). In the embodiment shown in fig. 5, each of the computer readable tangible storage devices 830 is a magnetic disk storage device of an internal hard disk drive. In some embodiments, each computer readable tangible storage device 830 is a semiconductor memory device, such as a ROM 824, an EPROM, a flash memory, an optical disk, a magneto-optical disk, a solid state disk, a Compact Disk (CD), a Digital Versatile Disk (DVD), a floppy disk, a cassette, a tape, and/or another type of non-volatile computer readable tangible storage device capable of storing a computer program and digital information.
Each set of internal components 800A, 800B also includes an R/W drive or interface 832 for reading from and writing to at least one portable computer readable tangible storage device 936, such as a CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disk, or semiconductor memory device. Software programs, such as the software program 108 (FIG. 1) and the video partition program 116 (FIG. 1), may be stored on at least one respective portable computer readable tangible storage device 936, read by a respective R/W drive or interface 832, and loaded into a respective hard disk drive 830.
Each set of internal components 800A, 800B also includes a network adapter or interface 836, such as a TCP/IP adapter card, a wireless Wi-Fi interface card, or a 3G, 4G, or 5G wireless interface card or other wired or wireless communication link. The software programs 108 (FIG. 1) and the video partition programs 116 (FIG. 1) on the server computer 114 (FIG. 1) may be downloaded from external computers to the computer 102 (FIG. 1) and the server computer 114 over a network (e.g., the Internet, a local area network, or other wide area network) and corresponding network adapters or interfaces 836. The software programs 108 and video partition programs 116 on the server computer 114 are loaded into the respective hard disk drives 830 from a network adapter or interface 836. The network may include copper wires, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
Each set of external components 900A, 900B may include a computer display monitor 920, a keyboard 930, and a computer mouse 934. The external components 900A, 900B may also include touch screens, virtual keyboards, touch pads, pointing devices, and other human interface devices. Each set of the internal components 800A, 800B further includes a device driver 840, the device driver 840 interfacing to a computer display monitor 920, a keyboard 930, and a computer mouse 934. The device driver 840, R/W driver or interface 832, and network adapter or interface 836 include hardware and software (stored in the storage device 830 and/or ROM 824).
It is to be understood in advance that although the present disclosure includes a detailed description of cloud computing, the implementations described herein are not limited to a cloud computing environment. Rather, some embodiments may be implemented in connection with any other type of computing environment, whether now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with the provider of the service. The cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
The characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, automatically as needed without requiring human interaction with the service provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
The service models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly the application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
The deployment models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
Cloud computing environments are service oriented, with a focus on statelessness, loose coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes.
Referring to FIG. 6, an illustrative cloud computing environment 600 is shown. As shown, cloud computing environment 600 includes at least one cloud computing node 10 with which local computing devices used by cloud consumers, such as personal digital assistants (PDAs) or cellular telephones 54A, desktop computers 54B, laptop computers 54C, and/or automobile computer systems 54N, may communicate. The cloud computing nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually in at least one network, such as a private, community, public, or hybrid cloud as described above, or a combination thereof. This allows cloud computing environment 600 to offer infrastructure, platforms, and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It should be understood that the types of computing devices 54A-N shown in FIG. 6 are merely illustrative, and that cloud computing node 10 and cloud computing environment 600 may communicate with any type of computerized device over any type of network and/or network-addressable connection (e.g., using a web browser).
Referring to FIG. 7, a set of functional abstraction layers 700 provided by cloud computing environment 600 (FIG. 6) is shown. It should be understood in advance that the components, layers, and functions shown in fig. 7 are merely illustrative and embodiments are not limited thereto. As depicted, the following layers and corresponding functions are provided:
the hardware and software layer 60 includes hardware and software components. Examples of hardware components include: a mainframe 61; a RISC (reduced instruction set computer) architecture based server 62; a server 63; a blade server 64; a storage device 65; and a network and networking component 66. In some embodiments, the software components include web application server software 67 and database software 68.
The virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75.
In one example, the management layer 80 may provide the functionality described below. Resource provisioning 81 provides for dynamic procurement of computing resources and other resources to perform tasks in the cloud computing environment. Metering and pricing 82 provides cost tracking when utilizing resources in a cloud computing environment and bills or invoices the consumption of these resources. In one example, these resources may include application software licenses. Security provides authentication for cloud consumers and tasks, and protection for data and other resources. The user portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management to meet the required service level. Service Level Agreement (SLA) plan and implementation 85 provides prearrangement and procurement of cloud computing resources in anticipation of future needs according to the SLA.
Workload layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions that may be provided by the workload layer 90 include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and video partitioning 96. Video partitioning 96 may partition encoded video data into slices, tiles, and bricks.
Some embodiments relate to systems, methods, and/or computer-readable media at any possible technical detail level of integration. The computer-readable media may include computer-readable non-transitory storage media (or media) having computer-readable program instructions thereon for causing a processor to perform operations.
The computer readable storage medium may be a tangible device that can retain and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an erasable Programmable Read-Only Memory (EPROM or flash Memory), a Static Random Access Memory (SRAM), a portable compact disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a Memory stick, a floppy disk, a mechanical coding device (such as a punch card or a raised structure in a recess on which instructions are recorded), and any suitable combination of the foregoing. A computer-readable storage medium as used herein should not be interpreted as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through an electrical wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a corresponding computing/processing device, or to an external computer or external storage device, over a network (e.g., the internet, a local area network, a wide area network, and/or a wireless network). The network may include copper transmission cables, optical transmission fibers, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer-readable program code/instructions for performing the operations may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of at least one programming language, including an object-oriented programming language such as Smalltalk or C++, and a procedural programming language such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform various aspects or operations.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium storing the instructions includes an article of manufacture (manufacture) including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer-readable media according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises at least one executable instruction for implementing the specified logical function(s). The methods, computer systems, and computer-readable media may include additional blocks, fewer blocks, different blocks, or a different arrangement of blocks than those depicted in the figures. In some embodiments, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be apparent that the systems and/or methods described herein may be implemented in various forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement the systems and/or methods is not limiting of these implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to the specific software code-it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles "a" and "an" are intended to include at least one item, and may be used interchangeably with "at least one". Further, as used herein, the term "group" is intended to include at least one item (e.g., related items, unrelated items, combinations of related and unrelated items, etc.) and may be used interchangeably with "at least one". Where only one item is intended, the term "one" or similar language is used. Further, as used herein, the term "having" ("has", "have", "having"), and the like, is intended to be an open-ended term. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
The description of the various aspects and embodiments has been presented for purposes of illustration but is not intended to be exhaustive or limited to the disclosed embodiments. Even if combinations of features are recited in the claims and/or disclosed in the description, these combinations are not intended to limit the disclosure of possible implementations. Indeed, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may be directly dependent on only one claim, the disclosure of possible implementations includes a combination of each dependent claim with every other claim in the claim set. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A method of partitioning encoded video data, comprising:
receiving video frame data;
dividing the video frame data into at least one sub-unit, wherein the sub-unit is a slice, and wherein, in a rectangular slice mode, the slices constitute a rectangular region;
each of the slices having a unique address value, the at least one slice being arranged in increasing order of the address values, and the left and top boundaries associated with each of the slices comprising a picture boundary and at least one boundary of a previously decoded slice;
signaling, in a picture parameter set related to the video frame data, a number of bricks in a slice of the video frame data; and
signaling an address value of the slice in the picture parameter set related to the video frame data if it is determined that the value of the rectangular slice flag rect_slice_flag is false and the number of bricks is greater than 1.
2. The method of claim 1, wherein the mode associated with the slice comprises a raster-scan slice mode or a rectangular slice mode;
the method further comprises:
decoding the slices in the order corresponding to the raster-scan slice mode; or
decoding the slices in the order corresponding to the rectangular slice mode.
3. The method of claim 2, further comprising: enabling the raster-scan slice mode within the rectangular slice mode.
4. The method of claim 1, wherein after dividing the video frame data into at least one sub-unit, the method further comprises: setting a flag indicating a number of sub-units of the video frame data.
5. The method of claim 4, further comprising: if it is determined, according to the number of sub-units indicated by the set flag, that the slice contains one tile and more than one brick, decoding the slice as an independent picture.
6. The method according to any one of claims 1 to 5, further comprising: decoding the encoded video data according to the unique address value associated with the slice.
7. A computer system for partitioning encoded video data, the computer system comprising:
a receiving module, configured to receive video frame data; and
a dividing module, configured to divide the video frame data into at least one sub-unit, wherein the sub-unit is a slice, and wherein, in a rectangular slice mode, the slices constitute a rectangular region;
wherein each of the slices has a unique address value, the at least one slice being arranged in increasing order of the address values, and the left and top boundaries associated with each of the slices comprise a picture boundary and at least one boundary of a previously decoded slice;
wherein a number of bricks in a slice of the video frame data is signaled in a picture parameter set related to the video frame data; and
wherein an address value of the slice is signaled in the picture parameter set related to the video frame data if it is determined that the rectangular slice flag rect_slice_flag is false and the number of bricks is greater than 1.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-6 when executing the program.
9. A non-transitory computer-readable storage medium having instructions stored thereon, the instructions comprising: at least one instruction that, when executed by at least one processor of a computer, causes the at least one processor to perform the method of any one of claims 1-6.
CN202010580337.XA 2019-06-24 2020-06-23 Method, computer system and storage medium for partitioning encoded video data Active CN112135139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310268603.9A CN116260971B (en) 2019-06-24 2020-06-23 Method and device for encoding and decoding video data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962865945P 2019-06-24 2019-06-24
US62/865,945 2019-06-24
US16/908,036 US11589043B2 (en) 2019-06-24 2020-06-22 Flexible slice, tile and brick partitioning
US16/908,036 2020-06-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310268603.9A Division CN116260971B (en) 2019-06-24 2020-06-23 Method and device for encoding and decoding video data

Publications (2)

Publication Number Publication Date
CN112135139A (en) 2020-12-25
CN112135139B (en) 2023-03-24

Family

ID=73851257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010580337.XA Active CN112135139B (en) 2019-06-24 2020-06-23 Method, computer system and storage medium for partitioning encoded video data

Country Status (1)

Country Link
CN (1) CN112135139B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102640492A (en) * 2009-10-30 2012-08-15 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding coding unit of picture boundary
CN104838654A (en) * 2012-12-06 2015-08-12 Sony Corporation Decoding device, decoding method, and program
CN109691103A (en) * 2016-07-14 2019-04-26 Koninklijke KPN N.V. Video encoding
CN109845268A (en) * 2016-10-14 2019-06-04 MediaTek Inc. Block partitioning using tree structures

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Versatile Video Coding (Draft 5), JVET-N1001-v8; Benjamin Bross et al.; Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting; 2019-06-11 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035350

Country of ref document: HK

GR01 Patent grant