WO2025149542A1 - Adaptive thresholding for motion information coding
- Publication number
- WO2025149542A1 (PCT/EP2025/050375)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- vector predictor
- current
- poc
- dynamic parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- a video encoder or video decoder may determine that a selected motion vector predictor for a current block is non-valid. Based on determining the selected motion vector predictor is non-valid, a dynamic parameter may be computed. A replacement motion vector predictor may be determined based on the computed dynamic parameter. The current block may be encoded or decoded based on the replacement motion vector predictor.
- a video encoder or video decoder may determine a distance between a current picture order count (POC) and a reference POC. The dynamic parameter may be computed based on the distance between the current POC and the reference POC.
- a video encoder or video decoder may determine a temporal layer ID of a current picture. The dynamic parameter may be computed based on the temporal layer ID of the current picture.
- a video encoder or video decoder may determine an absolute distance between a current quantization parameter (QP) and a minimum QP in the current picture. The dynamic parameter may be computed based on the absolute distance between the current QP and the minimum QP in the current picture.
- a video encoder or video decoder may determine a similarity between a first motion vector predictor and a second motion vector predictor.
- the dynamic parameter may be computed based on the similarity between the first motion vector predictor and the second motion vector predictor.
- the dynamic parameter may be computed based on any combination of the above examples (e.g., based on at least one of the distance between the current POC and the reference POC, the temporal layer ID of the current picture, the absolute distance between the current QP and the minimum quantization parameter (QP) in the current picture, or the similarity between the first motion vector predictor and the second motion vector predictor).
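- A minimal C++ sketch of how such a combination might be computed, assuming all inputs are available identically at the encoder and decoder. The structure, weights, and thresholds below are illustrative assumptions, not values taken from this disclosure:

```cpp
#include <algorithm>
#include <cstdlib>

// Illustrative inputs assumed available at both encoder and decoder.
struct BlockContext {
    int currentPoc;      // POC of the current picture
    int referencePoc;    // POC of the selected reference picture
    int temporalLayerId; // temporal layer (TL) ID of the current picture
    int distMinQp;       // |current QP - minimum QP in the current picture|
    int mvpSimilarity;   // norm of the difference between the two MVP candidates
};

// Hypothetical combination: d grows with POC distance and QP distance, and
// shrinks with the temporal layer ID and with very dissimilar MVP candidates.
int computeDynamicParameter(const BlockContext& ctx) {
    int distPoc = std::abs(ctx.currentPoc - ctx.referencePoc);
    int d = 1;
    d += distPoc / 4;                   // larger temporal distance -> larger d
    d -= ctx.temporalLayerId;           // higher TL -> smaller MVDs -> smaller d
    d += ctx.distMinQp / 6;             // coarser quantization -> larger d
    if (ctx.mvpSimilarity > 16) d -= 1; // very different MVPs -> smaller d
    return std::max(d, 0);              // keep the threshold non-negative
}
```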
- Systems, methods, and instrumentalities described herein may involve a decoder.
- the systems, methods, and instrumentalities described herein may involve an encoder.
- the systems, methods, and instrumentalities described herein may involve a signal (e.g., from an encoder and/or received by a decoder).
- a computer-readable medium may include instructions for causing one or more processors to perform methods described herein.
- a computer program product may include instructions which, when the program is executed by one or more processors, may cause the one or more processors to carry out the methods described herein.
- FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
- FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
- FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
- FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
- FIG. 2 illustrates an example video encoder.
- FIG. 3 illustrates an example video decoder.
- FIG. 4 illustrates an example of a system in which various aspects and examples may be implemented.
- FIG. 5 illustrates an example of Coding Tree Unit (CTU), Coding Unit (CU), and Prediction Unit (PU) structures to represent a compressed picture (e.g., a picture compressed using a video coding scheme).
- FIG. 8 illustrates an example of a CTU division of a coding tree, e.g., according to a video coding scheme.
- FIG. 9 illustrates an example of split modes supported in multi-type tree partitioning.
- FIG. 10 illustrates an example of signaling of inter prediction information (e.g., according to a video coding scheme).
- FIG. 12 illustrates an example of neighboring spatial locations A0, A1 (left), B0, B1, B2 (above), and collocated blocks for temporal motion vector prediction (TMVP) (H and C) of a current block.
- FIG. 14 illustrates an example of the construction of a list of merge motion vector predictor candidates of the video coding scheme.
- FIG. 15 illustrates an example of the construction of a list of merge motion vector predictor candidates (e.g., in the video coding scheme).
- FIG. 16 illustrates an example of whole-block and sub-block-based motion representation categories.
- FIGs. 17A-17B illustrate an example of non-sub-block merge candidate list construction.
- FIG. 18 illustrates an example of allowed motion vector differences (MVDs).
- FIG. 19 illustrates an example representation of CU motion data in geometric partitioning mode (GPM).
- FIG. 20 illustrates examples of GPM splits grouped by identical angles.
- FIG. 21 illustrates an example of blending between two predicted partitions performed in GPM.
- FIG. 22 illustrates an example of control point based affine motion models (e.g., supported by a video coding scheme).
- FIG. 23 illustrates an example of affine motion field representation on a 4x4 subblock basis.
- FIG. 24 illustrates an example of locations of inherited affine motion predictors.
- FIG. 28 illustrates an example of decoding side motion vector refinement (DMVR).
- the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
- the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
- the base station 114b may have a direct connection to the Internet 110.
- the base station 114b may not be required to access the Internet 110 via the CN 106/115.
- the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
- the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, mobility requirements, and the like.
- the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
- the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
- the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
- the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
- the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
- the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
- the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
- the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
- FIG. 1B is a system diagram illustrating an example WTRU 102.
- the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
- the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
- the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
- the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
- the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
- the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
- the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
- the power source 134 may be any suitable device for powering the WTRU 102.
- the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
- the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
- the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
- the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous.
- the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or processor 118).
- the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements is depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
- the MME 162 may be connected to each of the eNode-Bs 162a, 162b, 162c in the RAN 104 via an S1 interface and may serve as a control node.
- the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
- the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
- the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
- the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
- the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
- a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
- the IBSS mode of communication may sometimes be referred to herein as an “ad hoc” mode of communication.
- the AP may transmit a beacon on a fixed channel, such as a primary channel.
- the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
- the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
- Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
- the STAs (e.g., every STA, including the AP) may sense the primary channel.
- High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
- the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
- the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
- WLAN systems which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, may include a channel which may be designated as the primary channel.
- the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
- the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, that supports the smallest bandwidth operating mode.
- FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
- the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
- the RAN 113 may also be in communication with the CN 115.
- the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
- the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
- eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
- Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b, and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
- the CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements is depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
- Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
- different network slices may be established for different use cases such as services relying on ultra-reliable low latency communication (URLLC) access, services relying on enhanced mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like.
- the AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
- the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface.
- the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface.
- the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
- the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like.
- a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
- the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
- one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown).
- the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
- the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
- the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
- the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
- the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
- the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components).
- Metadata may be associated with the pre-processing, and attached to the bitstream.
- FIG. 4 is a diagram showing an example of a system in which various aspects and examples described herein may be implemented.
- System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers.
- Elements of system 400, singly or in combination, may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components.
- the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components.
- System 400 includes a storage device 440, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive.
- the storage device 440 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
- System 400 includes an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 can include its own processor and memory.
- the encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
- Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this document may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410.
- processor 410, memory 420, storage device 440, and encoder/decoder module 430 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
- memory inside of the processor 410 and/or the encoder/decoder module 430 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
- a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions.
- the external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory.
- an external non-volatile flash memory is used to store the operating system of, for example, a television.
- a fast external dynamic volatile memory such as a RAM is used as working memory for video encoding and decoding operations.
- the input to the elements of system 400 may be provided through various input devices as indicated in block 445.
- Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
- the input devices of block 445 have associated respective input processing elements as known in the art.
- the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data packets.
- the RF portion of various examples includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
- the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
- the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
- Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
- the RF portion includes an antenna.
- the system 400 includes communication interface 450 that enables communication with other devices via communication channel 460.
- the communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460.
- the communication interface 450 can include, but is not limited to, a modem or network card and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
- Data is streamed, or otherwise provided, to the system 400, in various examples, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers).
- the Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450 which are adapted for Wi-Fi communications.
- the communications channel 460 of these examples is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
- Other examples provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445.
- Still other examples provide streamed data to the system 400 using the RF connection of the input block 445.
- various examples provide data in a non-streaming manner.
- various examples use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth® network.
- the system 400 can provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495.
- the display 475 of various examples includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
- the display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device.
- the display 475 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
- the display 475 and speakers 485 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box.
- the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
- In some examples, encoding refers only to entropy encoding; in other examples, encoding refers only to differential encoding; and in still other examples, encoding refers to a combination of differential encoding and entropy encoding.
- the implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
- the methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
- Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
- the word “signal” refers to, among other things, indicating something to a corresponding decoder.
- Encoder signals may include, for example, an encoding function on an input for a block using a precision factor, etc.
- the same parameter is used at both the encoder side and the decoder side.
- an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information can include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry the bitstream of a described example.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on, or accessed or received from, a processor-readable medium.
- Inter prediction information may be represented in compressed video, for example, using different types of video compression schemes.
- a block-based video codec may associate motion information with a (e.g., each) block coded in inter mode.
- a block structure may be used in video coding schemes to represent a compressed picture. Motion representations may be assigned to an inter block.
- a video compression system may divide a picture into Coding Tree Units (CTUs).
- a size of a CTU may be, for example, 64x64, 128x128, or 256x256 pixels.
- a (e.g., each) CTU may be represented by a Coding Tree in the compressed domain.
- a block structure may be used in a video coding scheme.
- a block structure may be used to represent compressed pictures.
- a picture may be divided into square CTUs (e.g., in various video coding schemes).
- a CTU may be of size 32x32, 64x64, or 128x128.
- the CTU division of a picture may (e.g., thus) form a regular grid, where upper and left bounds may spatially coincide with the top and left border of the picture.
- FIG. 8 illustrates an example of a CTU division of a coding tree, e.g., according to a video coding scheme.
- Separate coding trees may be used in intra pictures, for example, with one coding tree for the luma component and another for the chroma components.
- the luma component part of a CTU may be referred to as a luma coding tree block.
- a luma coding tree block (CTB) may (e.g., then) be associated with a coding tree.
- the coding tree leaves may be associated with luma coding blocks.
- Intra picture video coding may use separated luma/chroma coding trees and a three (3) component picture, where the two chroma CTBs may share the same coding tree.
- FIG. 10 illustrates an example of signaling of inter prediction information (e.g., according to a video coding scheme).
- Table 1 provides an example summary of differences between AMVP, merge, and skip modes of inter coded CU.
- the merge index may enable the derivation of the prediction type (e.g., P or B picture), the reference picture list index, and/or the associated motion vectors.
- Reference pictures (e.g., up to two reference pictures) used to temporally predict a considered PU may be (e.g., explicitly) signaled, for example, with the motion vectors associated with each PU and each reference picture.
- the motion vectors may be predictively coded, for example, using a motion vector predictor (MVP) and a motion vector difference (MVD).
- the decoder side reconstructed motion data may include the sum of the MVPs used for a given PU and their associated MVDs.
- a redundancy check may be conducted between derived spatial MVPs. For example, duplicate derived MVPs may be discarded.
- a merge mode may be implemented in video coding schemes. As shown by example in FIG. 10, motion information coding/decoding according to the merge mode may take place in two modes, e.g., the skip mode and the merge mode.
- the decoder may retrieve the motion information of a PU, for example, based on a (e.g., one single) field (e.g., the merge index) that may be signaled (e.g., in the two modes).
- the merge index may indicate which Motion Vector Predictor (MVP) in the list of merge motion information predictors may be used to derive the motion information of a current PU.
- the list of motion information predictors may be referred to as the merge list or the merge candidate list.
- a candidate motion information predictor may be referred to as a merge candidate.
- the symbols A0, A1, B0, B1, and B2 may denote the spatial positions shown by example in FIG. 13. Spatial candidates, whose associated motion information may be different from each other, may be selected.
- a temporal predictor (e.g., TMVP) may also be selected. A candidate at the “center” position (C) may be used instead of the candidate at position H, for example, if the candidate at position H is not available.
- a pruning process may take place (e.g., as shown by example in FIG. 14), for example, to eliminate redundant candidates from the selected set of spatial and temporal candidates.
- candidates of another type may be pushed to the merge list (e.g., if the merge list is not full), for example, in the case of a B slice.
- a combined candidate type may be formed, for example, by combining the motion information associated with one reference picture list (L0) from one candidate already present in the merge list with the motion information associated with the other reference picture list (L1) from another candidate already present in the merge list.
- Zero motion vectors may be pushed to the back of the merge list until it is full, for example, if the merge list is still not full (e.g., with five (5) elements).
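- The construction order above (spatial candidates, the temporal candidate, pruning, combined bi-predictive candidates for B slices, then zero MVs) can be summarized in a simplified C++ sketch. The types and the duplicate check are deliberately reduced and are assumptions for illustration, not the normative process:

```cpp
#include <cstddef>
#include <vector>

struct MotionInfo { int mvL0x, mvL0y, mvL1x, mvL1y, refIdxL0, refIdxL1; };

// Simplified ordering only: spatial, temporal, pruning, combined candidates
// for B slices, then zero MVs until the list reaches maxSize (e.g., 5).
std::vector<MotionInfo> buildMergeList(
        const std::vector<MotionInfo>& spatial,   // from A0, A1, B0, B1, B2
        const std::vector<MotionInfo>& temporal,  // from H, else center C
        bool isBSlice, std::size_t maxSize) {
    std::vector<MotionInfo> list;
    auto pushUnique = [&](const MotionInfo& c) {
        for (const auto& e : list)  // pruning (simplified: L0 fields only)
            if (e.mvL0x == c.mvL0x && e.mvL0y == c.mvL0y &&
                e.refIdxL0 == c.refIdxL0) return;
        if (list.size() < maxSize) list.push_back(c);
    };
    for (const auto& c : spatial)  pushUnique(c);
    for (const auto& c : temporal) pushUnique(c);
    if (isBSlice) { /* combine L0 of one entry with L1 of another (omitted) */ }
    while (list.size() < maxSize)  // zero MVs fill the tail of the list
        list.push_back(MotionInfo{0, 0, 0, 0, 0, 0});
    return list;
}
```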
- FIG. 13 illustrates an example of positions of spatial and temporal motion vector predictors used in a merge mode of a video coding scheme. As shown in FIG. 13, spatial merge candidates are shown on the left and temporal merge candidates are shown on the right.
- FIG. 14 illustrates an example of the construction of a list of merge motion vector predictor candidates of the video coding scheme.
- FIG. 15 illustrates an example of the construction of a list of merge motion vector predictor candidates (e.g., in the video coding scheme).
- Inter prediction information may be represented and coded in video coding, such as in a video coding scheme.
- motion data representation may be richer in one video coding scheme than another.
- Motion data representation may be divided into categories (e.g., two main categories), such as whole-block-based motion representation and sub-block-based motion representation, as illustrated by example in FIG. 16.
- One or more (e.g., two) modes for coding motion information may be used in a (e.g., each) category.
- Modes for coding motion information may include, for example, merge/skip and AMVP.
- FIG. 16 illustrates an example of whole-block and sub-block-based motion representation categories.
- a first-in-first-out (FIFO) rule may be applied to manage the history-based MVP (HMVP) table.
- a redundant candidate in an HMVP table may be removed, for example, instead of the first candidate.
- the table may be reset, for example, at a (e.g., each) CTU row, e.g., to enable parallel processing.
- Symmetric MVD (SMVD) may include setting the MVD associated with reference picture list 1 (L1) equal to the opposite of the MVD associated with reference picture list 0 (L0) for a given block.
- Reference pictures used in SMVD mode may be derived by the decoder, for example, with pre-defined rules.
- SMVD may enable reduction of the rate cost for coding MVD information.
- SMVD may be selected at block level.
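- A one-function C++ sketch of the SMVD derivation described above (the L1 MVD is the opposite of the signaled L0 MVD); the Mv type is an assumption for illustration:

```cpp
struct Mv { int x, y; };

// SMVD: only the L0 MVD is signaled; the L1 MVD is derived as its opposite.
inline Mv deriveSmvdMvdL1(const Mv& mvdL0) {
    return Mv{ -mvdL0.x, -mvdL0.y };
}
```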
- Adaptive motion vector resolution (AMVR) may allow/enable signaling the MVD with quarter-pel, half-pel, integer-pel, or 4-pel luma sample resolution, which may allow/enable saving bits in the coding of MVD information.
- the motion vector resolution (e.g., in AMVR) may be chosen at block level.
- An example of a whole-block-based (e.g., or non-subblock-based) merge list construction process (e.g., for a video coding scheme) is illustrated by FIGs. 17A and 17B.
- Whole-block-based merge mode (e.g., in a video coding mechanism) may also be called regular merge mode.
- Whole-block-based merge mode merge MVP candidate list construction may differ, for example, among various video coding mechanisms.
- Whole-block-based merge mode may have multiple merge coding modes, which may include, for example, one or more of the following: Merge Mode with MV Difference (MMVD); Geometric Partitioning Mode (GPM); and/or Combined Intra/Inter Prediction (CIIP).
- a merge MVP candidate list is constructed with one or more of the following types of MVP candidates: spatial candidates; temporal MVP candidates; HMVP candidates; pairwise average candidates; and/or zero MV candidates.
- FIG. 18 illustrates an example of allowed motion vector differences (MVDs).
- MMVD merge mode may allow/enable coding of a limited motion vector difference (MVD) on top of selected merge MVP candidates, for example, to represent the motion information of a CU.
- MMVD coding may be limited to four (4) vector directions and eight (8) magnitude values, e.g., from ¼ luma sample to 32 luma samples.
- MMVD may provide an intermediate accuracy level, which may yield an intermediate trade-off between rate cost and MV accuracy to signal the motion information.
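- A C++ sketch of the limited MMVD signaling space described above, assuming the common set of four directions and eight magnitudes from ¼ to 32 luma samples (expressed in 1/16-pel units); the table values follow that assumption:

```cpp
struct Mv { int x, y; };

// MMVD: 4 directions x 8 magnitudes. Distances in 1/16-luma-sample units,
// i.e., 1/4 pel (4) up to 32 pels (512), matching the range in the text.
Mv deriveMmvdOffset(int directionIdx /*0..3*/, int distanceIdx /*0..7*/) {
    static const int kDist[8] = {4, 8, 16, 32, 64, 128, 256, 512};
    static const int kDirX[4] = {+1, -1, 0, 0};
    static const int kDirY[4] = {0, 0, +1, -1};
    return Mv{ kDirX[directionIdx] * kDist[distanceIdx],
               kDirY[directionIdx] * kDist[distanceIdx] };
}
```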
- FIG. 21 illustrates an example of blending between two predicted partitions performed in GPM.
- Sample values along the geometric partition edge may be adjusted using a blending process with adaptive weights (e.g., as illustrated by example in FIG. 21), for example, after predicting each part of the geometric partition.
- the process may form the prediction signal for the whole CU.
- a transform and quantization process may be applied to the whole CU (e.g., not for each partition), for example, as in other prediction modes.
- FIG. 23 illustrates an example of affine motion field representation on a 4x4 subblock basis.
- Affine motion compensation may be performed, for example, on a 4x4 subblock basis.
- a motion vector of a (e.g., each) 4x4 luma subblock may be derived, for example, by calculating the motion vector of the center sample of each subblock according to Eq. (2) or Eq. (3) (e.g., as shown by example in FIG. 23).
- the calculated motion vector may be rounded, for example, to 1/16 fraction accuracy.
- the motion compensation interpolation filters may (e.g., then) be applied to generate the prediction of a (e.g., each) subblock with the derived motion vector.
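- A floating-point C++ sketch of the per-subblock derivation described above, assuming a 4-parameter affine model with top-left and top-right control-point MVs and rounding to 1/16-pel accuracy. Real codecs use integer arithmetic, so this is illustrative only:

```cpp
#include <cmath>

struct Mv { double x, y; };

// 4-parameter affine model: derive the MV at the center of each 4x4 luma
// subblock from the two control-point MVs (top-left mv0, top-right mv1),
// then round to 1/16-pel accuracy.
Mv deriveAffineSubblockMv(Mv mv0, Mv mv1, int blockWidth,
                          int subX, int subY) {            // subblock indices
    double cx = subX * 4 + 2.0, cy = subY * 4 + 2.0;       // subblock center
    double a = (mv1.x - mv0.x) / blockWidth;               // model parameters
    double b = (mv1.y - mv0.y) / blockWidth;
    double mvx = a * cx - b * cy + mv0.x;
    double mvy = b * cx + a * cy + mv0.y;
    auto roundTo16th = [](double v) { return std::round(v * 16.0) / 16.0; };
    return Mv{ roundTo16th(mvx), roundTo16th(mvy) };
}
```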
- Affine AMVP mode may be applied for CUs based on width and/or height (e.g., with both width and height larger than or equal to 16).
- An affine flag (e.g., at CU level) may be signaled in the bitstream to indicate the use of affine AMVP mode. Another flag may signal if a 4-parameter affine or a 6-parameter affine model is used.
- the difference of the CPMVs of a current CU and their predictors CPMVPs may be coded, e.g., in affine AMVP mode.
- the CPMVPs used to predict the CPMV of a CU may be taken from an affine AMVP candidate list, which may be made of two (2) elements.
- the affine AMVP candidate list may be constructed, for example, using one or more of the following types of CPMV candidates (e.g., in the following order): inherited affine AMVP candidates extrapolated from the CPMVs of the neighbor CUs; constructed affine AMVP candidate CPMVPs that may be derived using the translational MVs of the neighbor CUs; translational MVs from neighboring CUs; and/or zero MVs.
- Candidate checking may be used. Checking a potential candidate may include checking that a valid affine AMVP or affine merge candidate to predict the current CU’s affine CPMVs is available and/or is valid. An available and valid candidate may be added to the candidate list under construction.
- the checking order of inherited affine AMVP candidates may be the same as or similar to the checking order of inherited affine merge candidates.
- a difference (e.g., the only difference) may be that (e.g., only) the affine CU that has the same reference picture as the current block may be considered for an AMVP candidate.
- a pruning process may not be applied, for example, if/when inserting an inherited affine motion predictor into the candidate list.
- MVs mv0, mv1, and mv2 may be added, e.g., in order, as translational MVs to predict (e.g., all) control point MVs of the current CU, e.g., if/when available, for example, if the affine AMVP list of candidates still has fewer than two (2) entries after valid inherited affine AMVP candidates and constructed AMVP candidates are inserted.
- Zero MVs may (e.g., then) be used to fill the affine AMVP list if it is still not full.
- a sub-block merge/skip mode may be a merge mode using a merge candidate list of (e.g., at most five (5)) elements with (e.g., only) subblock-based motion candidates.
- a merge index (e.g., for regular merge) may indicate a subblock-based merge candidate used to derive the motion data of a CU.
- the subblock-based merge candidate list may be made of, for example, the following elements.
- a Subblock-based Temporal Motion Vector Prediction (SbTMVP) candidate may be put at first place.
- Affine merge candidates may (e.g., then) be put in the list.
- the subblock merge list may be constructed, for example, with one or more of the following list of candidates: SbTMVP; inherited affine merge candidates; constructed affine merge candidates CPMVPs that may be derived using the translational MVs of the neighbor CUs; and/or zero MVs.
- Block-based video coding may offer a wide range of flexibility of configurations. These configurations may depend on a targeted goal to achieve (e.g., compression efficiency, complexity, delays, robustness, etc.). These configurations may be driven by the encoder setting.
- For all intra (AI) configuration, pictures (e.g., each picture) may be encoded as an intra picture (I picture). Pictures (e.g., all pictures) may be coded using the temporal order and may use the same quantization parameter (QP).
- a hierarchy of bi-predicted pictures (B pictures) may be used (e.g., as shown in FIG. 27). In that configuration, the first picture may be an I picture, and the others may be B pictures encoded in a specific order depending on their positions in the group of pictures (GOP). Each picture may belong to a specific temporal layer (TL). The pictures at a lower temporal layer may be the reference for the pictures at the upper temporal layer.
- the QP of pictures (e.g., each picture) may be adjusted periodically depending on its position in the GOP.
- the first picture may be encoded as an I picture, and the subsequent pictures may be encoded as predicted pictures (P pictures). The pictures (e.g., all pictures) may be coded using the temporal order.
- the QP of pictures may be adjusted periodically depending on their position relative to the first frame.
- the first picture may be encoded as an I picture, and the subsequent pictures may be encoded as B pictures. The pictures (e.g., all pictures) may be coded using the temporal order. The QP of pictures (e.g., each picture) may be adjusted periodically depending on its position.
- the application of DMVR may be restricted and may be (e.g., may only be) applied for the CUs which are coded with the following modes and features: CU level merge mode with bi-prediction MV; one reference picture is in the past and another reference picture is in the future with respect to the current picture; the distances (e.g., picture order count (POC) differences) from the two reference pictures to the current picture are the same; both reference pictures are short-term reference pictures; the CU has more than 64 luma samples; both CU height and CU width are larger than or equal to 8 luma samples; the BCW weight index indicates equal weight; WP is not enabled for the current block; and CIIP mode is not used for the current block.
- the refined MV derived by DMVR process may be used to generate the inter prediction samples and may (e.g., may also) be used in temporal motion vector prediction for future pictures coding.
- the original MV may be used in a deblocking process and may (e.g., may also) be used in spatial motion vector prediction for future CU coding.
- Additional features of DMVR may include a DMVR search scheme, bilinear-interpolation and sample padding, a maximum DMVR processing unit, or DMVR applied to affine merge blocks.
- search points may surround the initial MV, and the MV offset may obey the MV difference mirroring rule.
- points that are checked by DMVR, denoted by the candidate MV pair (MV0, MV1), may obey the following two equations: MV0′ = MV0 + MV_offset and MV1′ = MV1 − MV_offset, where MV_offset represents the refinement offset between the initial MV and the refined MV in one of the reference pictures.
- the center position cost and the costs at four neighboring positions from the center may be used to fit a 2-D parabolic error surface equation of the following form: E(x, y) = A(x − x_min)² + B(y − y_min)² + C, where (x_min, y_min) may correspond to the fractional position with the least cost and C may correspond to the minimum cost value.
- the 8-tap interpolation filter may be applied to generate the final prediction (e.g., after the refined MV is attained with the DMVR search process).
- the samples that may not be needed for the interpolation process based on the original MV, but may be needed for the interpolation process based on the refined MV, may be padded from the available samples.
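- The parabolic error-surface refinement described above reduces, per axis, to a closed-form sub-pel offset. A C++ sketch under the assumption that the costs are SADs at the center and its four integer neighbors:

```cpp
// Fit E(x, y) = A(x - xMin)^2 + B(y - yMin)^2 + C to the center cost and its
// four neighbors; each axis is solved independently. Inputs are the costs at
// offsets -1 (eNeg), +1 (ePos), and 0 (eCenter); output is clamped to half a pel.
double subPelOffset1D(double eNeg, double ePos, double eCenter) {
    double denom = 2.0 * (eNeg + ePos - 2.0 * eCenter);
    if (denom <= 0.0) return 0.0;      // degenerate surface: no refinement
    double off = (eNeg - ePos) / denom;
    if (off > 0.5) off = 0.5;
    if (off < -0.5) off = -0.5;
    return off;
}
// xMin = subPelOffset1D(eLeft, eRight, eCenter);
// yMin = subPelOffset1D(eUp, eDown, eCenter);
```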
- FIG. 29 illustrates an example combination of motion vector predictors and motion vector difference values.
- a block to predict is shown together with its two AMVP motion vector predictor candidates MVP0 and MVP1, and a motion vector difference MVd to signal.
- the coding cost of a motion vector difference may be correlated to its magnitude and may increase as a function of the MVd magnitude.
- FIG. 29 shows that the coding configuration may use the motion vector predictor MVP1 instead of MVP0, which may lead to a smaller motion vector difference magnitude.
- MVP0 may be detected as non-optimal. The decoder may be able to detect that the use of motion vector predictor MVP0 is not the most optimal one to employ for pointing to the spatial position corresponding to MVP0 + MVd (e.g., as shown in FIG. 29).
- FIG. 30 illustrates an example decoder-side detection of non-optimal motion vector coding.
- the decoder may be able to detect that the use of motion vector predictor MVP1 is the most likely one compared to MVP0.
- the signaling of the motion vector predictor and the signaling of the motion vector difference may carry some redundant information.
- a detector that may be able to detect a non-valid situation (e.g., such as the example shown in FIG. 29) may adapt the motion vector relative to the situation and may reconstruct the motion vector based on a newly defined motion vector predictor candidate (MVP′idx), such as the one described below:
- a detector that may be able to detect a non-valid combination of motion vector predictors and a motion vector difference may adapt the motion vector predictor according to the above equation, may code an MVD of smaller magnitude based on this new candidate, and may encode the newly defined MVD and the corresponding MVP index that may be detected as a non-valid situation by the decoder.
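- A hedged C++ sketch of the detector only: it flags a combination as non-valid when the reconstructed MV lands closer to the other MVP candidate than to the selected one by more than the threshold d. The L1 cost measure and the comparison rule are assumptions; the replacement-predictor equation referenced in the text is not reproduced here:

```cpp
#include <cstdlib>

struct Mv { int x, y; };

// Decoder-side check of a non-valid combination: the reconstructed MV
// (selected MVP + signaled MVD) should not land closer to the *other*
// MVP candidate than to the selected one by more than the threshold d.
bool isNonValidCombination(Mv selMvp, Mv otherMvp, Mv mvd, int d) {
    Mv rec{ selMvp.x + mvd.x, selMvp.y + mvd.y };
    int costSel   = std::abs(rec.x - selMvp.x)   + std::abs(rec.y - selMvp.y);
    int costOther = std::abs(rec.x - otherMvp.x) + std::abs(rec.y - otherMvp.y);
    return costOther + d < costSel;  // other MVP clearly closer -> non-valid
}
```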
- FIG. 31 illustrates an example of global gain vs. d parameter trends. Examples herein may increase motion data coding efficiency by optimizing the cost of the MVD when MV coding design examples described herein are used.
- MV coding design examples herein describe a mechanism that may rely on a static parameter d (see the equation above), which may be any positive value.
- the selection of the best d parameter to get an optimum compression gain may be complex.
- This d parameter may impact both the magnitude of a (e.g., new) MVP candidate and (e.g., then) the gain (e.g., in magnitude reduction) of a (e.g., new) MVD based on this new MVP candidate.
- the d parameter may (e.g., may also) impact the number of blocks where this mechanism may be applied. The bigger the parameter d is, the better the potential gain on blocks (e.g., each individual block) may be. The bigger the d parameter is, the fewer the blocks to which the mechanism may be applied (e.g., as shown in FIG. 31).
- the optimum d for a sequence or a set of sequences may not be deduced automatically but may be adjusted empirically depending on the content and the encoding parameters used.
- Examples herein may include a dynamic parameter d when the MV coding design examples described herein are used.
- This dynamic parameter d may be set automatically depending on internal characteristics of a current block level or a picture level (e.g., frame level) available both at the encoder and decoder sides.
- the mechanism that defines this dynamic parameter d per block (e.g., each block) may be the same at the encoder and the decoder sides.
- the dynamic parameter may depend on a relative distance between a current POC and a reference POC.
- the encoder or decoder may determine a distance between the current POC and the reference POC. Based on the determined distance between the current POC and the reference POC, the dynamic parameter may be computed.
- the absolute distance between the current POC and the POC of the reference picture used by its AMVP may vary from 1 up to 32 for blocks (e.g., for each block). Statistically, the amplitude of the coded MV may be smaller if this distance is short than if this distance is large.
- a specific d parameter may be associated with each level (e.g., or group of levels) of this POC distance (distPOC). Typically, the d parameter value may increase if the absolute value of distPOC increases.
- Motion vector differences may be of larger magnitude as the temporal distance from a block to its reference picture increases.
- a monotonically increasing function may be used.
- the function may be an affine, piecewise linear function.
- the function may be defined by means of a lookup table, which may be used to map each possible POC distance to a value of the dynamic parameter d, as in the sketch below.
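- As a sketch only, the mapping below implements a lookup-table variant with a piecewise constant, monotonically increasing profile; the table values and the clamp at a distance of 32 are hypothetical:

```python
# Hypothetical LUT from POC distance to the dynamic parameter d; the values
# are illustrative and chosen only to increase with the distance.
POC_DIST_TO_D = {1: 2, 2: 2, 4: 4, 8: 8, 16: 12, 32: 16}

def dynamic_d_from_poc(curr_poc, ref_poc):
    """Return d for a block from the POC distance to its reference picture."""
    dist = min(abs(curr_poc - ref_poc), 32)
    for level in sorted(POC_DIST_TO_D):   # nearest tabulated level >= dist
        if dist <= level:
            return POC_DIST_TO_D[level]
    return POC_DIST_TO_D[32]
```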
- FIG. 32 illustrates an example of a dynamic parameter d being computed on the encoder side.
- the d parameter value may be computed before the detector is applied (e.g., as shown in FIG. 32).
- the encoder may compute a dynamic parameter.
- a selected motion vector predictor may be determined to be non-valid.
- a replacement motion vector predictor may be determined based on the dynamic parameter.
- a motion vector difference may be determined based on the replacement motion vector predictor, and an indication of the motion vector difference may be included in the video data.
- the replacement motion vector predictor may be determined based on the computed dynamic parameter.
- the current block may be encoded based on the replacement motion vector predictor. This encoder-side flow is outlined in the sketch below.
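- The sketch below strings these steps together in the FIG. 32 order (d computed before the detector), reusing the helpers defined in the earlier sketches; the replacement rule shown (re-predicting from the cheapest other candidate) is an assumption, not the normative derivation:

```python
def encode_motion_info(curr_poc, ref_poc, mvp_list, idx, mv):
    """Hypothetical encoder-side flow; reuses mvd_cost, is_non_valid, and
    dynamic_d_from_poc from the sketches above."""
    d = dynamic_d_from_poc(curr_poc, ref_poc)             # dynamic parameter
    mvd = (mv[0] - mvp_list[idx][0], mv[1] - mvp_list[idx][1])
    if is_non_valid(mvp_list, idx, mvd, d):
        # assumed replacement: the other candidate with the cheapest MVD
        alt = min((j for j in range(len(mvp_list)) if j != idx),
                  key=lambda j: mvd_cost((mv[0] - mvp_list[j][0],
                                          mv[1] - mvp_list[j][1])))
        mvd = (mv[0] - mvp_list[alt][0], mv[1] - mvp_list[alt][1])
    return idx, mvd   # a decoder running the same detector mirrors the swap
```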
- the dynamic parameter d may depend on a temporal layer ID.
- the encoder or decoder may determine a temporal layer ID of a current picture. Based on the temporal layer ID of the current picture, the dynamic parameter may be computed. If a hierarchical B structure is used (e.g., as shown in FIG. 27), the value of the dynamic parameter applied for a block may rely on the temporal layer (TL) ID of the current picture.
- a specific dynamic parameter may be associated with each TL ID (or set of TL IDs). The dynamic parameter value may decrease as the TL ID increases.
- an infinite value of the dynamic parameter may be set for a TL ID (or a set of TL IDs), which may selectively deactivate the modified motion data coding mechanism for the selected TL IDs.
- the function f( ) that may compute the value of the dynamic parameter may be the same at the encoder and the decoder side. This function may be monotonically decreasing as a function of the temporal layer ID. The temporal distance between a picture and its closest reference picture may decrease as a function of the temporal layer ID; thus motion vectors, and hence motion vector differences, may also statistically decrease as a function of the temporal layer ID.
- the proposed function f may be linear, piecewise linear, or may take the form of a lookup table that may map the temporal layer ID to a value of the dynamic parameter. An example of such a function follows.
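- For illustration only (the values below are hypothetical, not taken from this disclosure), a lookup-table form that decreases over the active layers and uses an infinite entry to deactivate the mechanism on the top layer might look like:

```python
import math

# Hypothetical TL-ID-to-d table: decreasing over TL 0..3, with math.inf
# selectively deactivating the mechanism on TL 4, as described above.
TL_TO_D = [16, 8, 4, 2, math.inf]

def dynamic_d_from_tl(tl_id):
    return TL_TO_D[min(tl_id, len(TL_TO_D) - 1)]
```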
- the dynamic parameter d may depend on the quantization parameter (QP). A specific dynamic parameter may be associated with each QP value (or set of QP values).
- the dynamic parameter value may increase if the distMinQP value increases.
- the proposed function f may be linear, piecewise linear, or may take the form of a lookup table that may map the parameter distMinQP to a value of d, as in the sketch below.
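- A minimal sketch, assuming distMinQP is the distance between the current QP and the minimum QP of the current picture, with an arbitrary slope and cap that keep the mapping monotonically increasing:

```python
# Hypothetical QP-driven dynamic parameter; slope and cap are illustrative.
def dynamic_d_from_qp(curr_qp, min_qp):
    dist_min_qp = abs(curr_qp - min_qp)
    return min(2 + dist_min_qp // 2, 16)
```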
- the dynamic parameter d may depend on the value of motion vector candidate similarity.
- the encoder or decoder may determine a similarity between a first motion vector predictor and a second motion vector predictor. Based on the similarity between the first motion vector predictor and the second motion vector predictor, the dynamic parameter may be computed.
- the values of the dynamic parameter applied for a block may rely on the current similarity of the two current MVP candidates. This similarity may be calculated as the norm of the vector difference between these candidates and may be expressed in several forms (e.g., as an L1 or L2 norm).
- a specific dynamic parameter may be associated with each MVPSimilarity value or range of values.
- the dynamic parameter value may increase if the MVPSimilarity value decreases.
- the function f( ) that computes the value of d may be the same at the encoder and the decoder side, as in the sketch below.
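- As a sketch, using the L2 norm of the candidate difference as the MVPSimilarity measure and hypothetical thresholds; note the inverse relation (the closer the two candidates, the larger d becomes):

```python
# Hypothetical similarity-driven dynamic parameter.
def mvp_similarity(mvp0, mvp1):
    """L2 norm of the difference between the two MVP candidates."""
    dx, dy = mvp0[0] - mvp1[0], mvp0[1] - mvp1[1]
    return (dx * dx + dy * dy) ** 0.5

def dynamic_d_from_similarity(mvp0, mvp1):
    s = mvp_similarity(mvp0, mvp1)
    if s < 4:
        return 16   # near-identical candidates: large threshold
    if s < 16:
        return 8
    return 2        # well-separated candidates: small threshold
```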
- the dynamic parameter d may depend on the set of parameters available at the decoder and encoder sides.
- the values of the dynamic parameter applied for a block may rely on (e.g., may be computed based on) combinations of the parameters of the examples herein (e.g., any combination of all of the parameters), for example: d = f(currPoc, refPoc, TLid, QP, maxQPPoc, MVPSimilarity). One possible combination is sketched below.
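- One possible combination, reusing the per-factor sketches above, is to take the most conservative (largest) threshold, which also keeps any infinite deactivation value in force; this combination rule is an assumption, as the disclosure does not fix the form of f:

```python
# Hypothetical combination of the individual factors sketched above.
def dynamic_d(curr_poc, ref_poc, tl_id, curr_qp, min_qp, mvp0, mvp1):
    return max(dynamic_d_from_poc(curr_poc, ref_poc),
               dynamic_d_from_tl(tl_id),
               dynamic_d_from_qp(curr_qp, min_qp),
               dynamic_d_from_similarity(mvp0, mvp1))
```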
- the proposed distribution of dynamic parameter values may normatively be transmitted by means of a dedicated sequence parameter set (SPS) signaling indication (e.g., flag).
- the proposed distribution of dynamic parameter values may normatively be transmitted by means of a dedicated picture parameter set (PPS) signaling indication.
- the proposed distribution of dynamic parameter values may normatively be transmitted by means of a dedicated picture header syntax element.
- the proposed distribution of dynamic parameter values may normatively be transmitted by means of a dedicated slice header syntax element.
- the proposed distribution of dynamic parameter values may normatively be transmitted by means of a dedicated sub-picture level syntax element.
- Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
- a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Systems, methods, and instrumentalities are disclosed herein for adaptive thresholding for motion information coding. In examples, a video encoder or video decoder may determine that a selected motion vector predictor for a current block is non-valid. Based on determining the selected motion vector predictor is non-valid, a dynamic parameter may be computed. A replacement motion vector predictor may be determined based on the computed dynamic parameter. The current block may be encoded or decoded based on the replacement motion vector predictor.
Description
ADAPTIVE THRESHOLDING FOR MOTION INFORMATION CODING
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of European Patent Application Number 24305051.5, filed January 9, 2024, the contents of which are incorporated by reference in their entirety herein.
BACKGROUND
[0002] Video coding systems may be used to compress digital video signals, e.g., to reduce the storage and/or transmission bandwidth needed for such signals. Video coding systems may include, for example, block-based, wavelet-based, and/or object-based systems.
SUMMARY
[0003] Systems, methods, and instrumentalities are disclosed herein for adaptive thresholding for motion information coding. In examples, a video encoder or video decoder may determine that a selected motion vector predictor for a current block is non-valid. Based on determining the selected motion vector predictor is non-valid, a dynamic parameter may be computed. A replacement motion vector predictor may be determined based on the computed dynamic parameter. The current block may be encoded or decoded based on the replacement motion vector predictor.
[0004] In examples, a video encoder or video decoder may determine a distance between a current picture order count (POC) and a reference POC. The dynamic parameter may be computed based on the distance between the current POC and the reference POC. In examples, a video encoder or video decoder may determine a temporal layer ID of a current picture. The dynamic parameter may be computed based on the temporal layer ID of the current picture. In examples, a video encoder or video decoder may determine an absolute distance between a current quantization parameter (QP) and a minimum QP in the current POC. The dynamic parameter may be computed based on the absolute distance between the current QP and the minimum QP in the current POC. In examples, a video encoder or video decoder may determine a similarity between a first motion vector predictor and a second motion vector predictor. The dynamic parameter may be computed based on the similarity between the first motion vector predictor and the second motion vector predictor. In examples, the dynamic parameter may be computed based on any combination of the above examples (e.g., based on at least one of the distance between the current POC and the reference POC, the temporal layer ID of the current picture, the absolute distance between the current QP and the minimum QP in the current POC, or the similarity between the first motion vector predictor and the second motion vector predictor).
[0005] These examples may be performed by a device with a processor. The device may be an encoder or a decoder. These examples may be performed by a computer program product which is stored on a non-transitory computer readable medium and includes program code instructions. These examples may be performed by a computer program comprising program code instructions.
[0006] Systems, methods, and instrumentalities described herein may involve a decoder. In some examples, the systems, methods, and instrumentalities described herein may involve an encoder. In some examples, the systems, methods, and instrumentalities described herein may involve a signal (e.g., from an encoder and/or received by a decoder). A computer-readable medium may include instructions for causing one or more processors to perform methods described herein. A computer program product may include instructions which, when the program is executed by one or more processors, may cause the one or more processors to carry out the methods described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
[0008] FIG. 1 B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1 A according to an embodiment.
[0009] FIG. 1 C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
[0010] FIG. 1 D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1 A according to an embodiment.
[0011] FIG. 2 illustrates an example video encoder.
[0012] FIG. 3 illustrates an example video decoder.
[0013] FIG. 4 illustrates an example of a system in which various aspects and examples may be implemented.
[0014] FIG. 5 illustrates an example of Coding Tree Unit (CTU), Coding Unit (CU), and Prediction Unit (PU) structures to represent a compressed picture (e.g., a picture compressed using a video coding scheme).
[0015] FIG. 6 illustrates an example of CTUs, PUs, and Transform Units (TUs) in video coding.
[0016] FIG. 7 illustrates an example partitioning of CUs into PUs.
[0017] FIG. 8 illustrates an example of a CTU division of a coding tree, e.g., according to a video coding scheme.
[0018] FIG. 9 illustrates an example of split modes supported in multi-type tree partitioning.
[0019] FIG. 10 illustrates an example of signaling of inter prediction information (e.g., according to a video coding scheme).
[0020] FIG. 11 illustrates an example of motion vector prediction (MVP) candidate list construction in an adaptive motion vector prediction (AMVP) mode of a video coding mechanism.
[0021] FIG. 12 illustrates an example of neighboring spatial locations A0, A1 (left), B0, B1, B2 (above) and collocated blocks for temporal motion vector prediction (TMVP) (H and C) of a current block.
[0022] FIG. 13 illustrates an example of positions of spatial and temporal motion vector predictors used in a merge mode of a video coding scheme.
[0023] FIG. 14 illustrates an example of the construction of a list of merge motion vector predictor candidates of the video coding scheme.
[0024] FIG. 15 illustrates an example of the construction of a list of merge motion vector predictor candidates (e.g., in the video coding scheme).
[0025] FIG. 16 illustrates an example of whole-block and sub-block-based motion representation categories.
[0026] FIGs. 17A-17B illustrate an example of non-sub-block merge candidate list construction.
[0027] FIG. 18 illustrates an example of allowed motion vector differences (MVDs).
[0028] FIG. 19 illustrates an example representation of CU motion data in geometric partitioning mode (GPM).
[0029] FIG. 20 illustrates examples of GPM splits grouped by identical angles.
[0030] FIG. 21 illustrates an example of blending between two predicted partitions performed in GPM.
[0031] FIG. 22 illustrates an example of control point based affine motion models (e.g., supported by a video coding scheme).
[0032] FIG. 23 illustrates an example of affine motion field representation on a 4x4 subblock basis.
[0033] FIG. 24 illustrates an example of locations of inherited affine motion predictors.
[0034] FIG. 25 illustrates an example of control point motion vector inheritance.
[0035] FIG. 26 illustrates an example of locations of a candidate’s position for constructed affine merge mode.
[0036] FIG. 27 illustrates an example of a hierarchical B picture structure with four temporal layers.
[0037] FIG. 28 illustrates an example of decoding side motion vector refinement (DMVR).
[0038] FIG. 29 illustrates an example combination of motion vector predictors and motion vector difference values.
[0039] FIG. 30 illustrates an example decoder side detection of a non-optimal motion vector coding configuration.
[0040] FIG. 31 illustrates an example of global gain vs. d parameter trends.
[0041] FIG. 32 illustrates an example of a dynamic parameter d being computed on the encoder side.
[0042] FIG. 33 illustrates an example of a dynamic parameter d being computed on the decoder side.
DETAILED DESCRIPTION
[0043] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings.
[0044] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0045] As shown in FIG. 1 A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station" and/or a "STA", may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0046] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0047] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0048] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0049] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0050] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0051] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0052] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0053] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0054] The base station 114b in FIG. 1 A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106/115.
[0055] The RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance
requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0056] The CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit- switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0057] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0058] FIG. 1 B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1 B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0059] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. As suggested above, the processor 118 may include a plurality of
processors. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1 B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0060] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0061] Although the transmit/receive element 122 is depicted in FIG. 1 B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0062] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
[0063] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118
may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0064] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0065] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0066] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0067] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) may occur.
[0068] FIG. 1 C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0069] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
[0070] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1 C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
[0071] The CN 106 shown in FIG. 1 C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0072] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
[0073] The SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
[0074] The SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
[0075] The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to
facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
[0076] Although the WTRU is described in FIGS. 1 A-1 D as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
[0077] In representative embodiments, the other network 112 may be a WLAN.
[0078] A WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP. The AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS. Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs. Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations. Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA. The traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic. The peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS). In certain representative embodiments, the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other. The IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
[0079] When using the 802.11ac infrastructure mode of operation or a similar mode of operations, the AP may transmit a beacon on a fixed channel, such as a primary channel. The primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling. The primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP. In certain representative embodiments, Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems. For CSMA/CA, the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off. One STA (e.g., only one station) may transmit at any given time in a given BSS.
[0080] High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
[0081] Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels. The 40 MHz and/or 80 MHz channels may be formed by combining contiguous 20 MHz channels. A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing, and time domain processing, may be done on each stream separately. The streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA. At the receiver of the receiving STA, the above-described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
[0082] Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah. The channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz and 20 MHz bandwidths in the TV White Space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths. The MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
[0083] WLAN systems, which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel. The primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, that supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes. Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency band may be considered busy even though a majority of the frequency band remains idle and may be available.
[0084] In the United States, the available frequency bands, which may be used by 802.11ah, are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
[0085] FIG. 1 D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment. As noted above, the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 113 may also be in communication with the CN 115.
[0086] The RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment. The gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the gNBs 180a, 180b, 180c may implement MIMO technology. For example, gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a. In an embodiment, the gNBs 180a, 180b, 180c may implement carrier aggregation technology. For example, the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum. In an embodiment, the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology. For example, WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[0087] The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
[0088] The gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration WTRUs 102a, 102b, 102c may communicate
with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
[0089] Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 1 D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
[0090] The CN 115 shown in FIG. 1 D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0091] The AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node. For example, the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like. Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by WTRUs 102a, 102b, 102c. For example, different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like. The AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
[0092] The SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface. The SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface. The SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b. The SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing
downlink data notifications, and the like. A PDU session type may be IP-based, non-IP based, Ethernetbased, and the like.
[0093] The UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
[0094] The CN 115 may facilitate communications with other networks. For example, the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108. In addition, the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. In one embodiment, the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
[0095] In view of Figures 1 A-1 D, and the corresponding description of Figures 1 A-1 D, one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0096] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0097] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed
(e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0098] This application describes a variety of aspects, including tools, features, examples, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the application or scope of those aspects. Indeed, all of the different aspects may be combined and interchanged to provide further aspects. Moreover, the aspects may be combined and interchanged with aspects described in earlier filings as well.
[0099] The aspects described and contemplated in this application may be implemented in many different forms. FIGs. 5-33 described herein may provide some examples, but other examples are contemplated. The discussion of FIGs. 5-33 does not limit the breadth of the implementations. At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded. These and other aspects may be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
[0100] In the present application, the terms “reconstructed” and “decoded” may be used interchangeably, the terms “pixel” and “sample” may be used interchangeably, the terms “image,” “picture” and “frame” may be used interchangeably.
[0101] Various methods are described herein, and each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various examples to modify an element, component, step, operation, etc., such as, for example, a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
[0102] Various methods and other aspects described in this application may be used to modify modules, for example, decoding modules, of a video encoder 200 and decoder 300 as shown in FIG. 2 and FIG. 3. Moreover, the subject matter disclosed herein may be applied, for example, to any type, format or version of video coding, whether described in a standard or a recommendation, whether pre-existing or future-
developed, and extensions of any such standards and recommendations. Unless indicated otherwise, or technically precluded, the aspects described in this application may be used individually or in combination.
[0103] Various numeric values are used in examples described in the present application, such as bits, bit depth, etc. These and other specific values are for purposes of describing examples and the aspects described are not limited to these specific values.
[0104] FIG. 2 is a diagram showing an example video encoder. Variations of example encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
[0105] Before being encoded, the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components). Metadata may be associated with the pre-processing, and attached to the bitstream.
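For illustration, the following Python sketch shows one such pre-encoding color transform on normalized [0, 1] samples; the BT.709 coefficients are an example choice, as the pre-encoding processing (201) is not tied to particular coefficients:

```python
# Illustrative RGB -> YCbCr conversion with BT.709 coefficients.
def rgb_to_ycbcr_bt709(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556 + 0.5   # 1.8556 = 2 * (1 - 0.0722)
    cr = (r - y) / 1.5748 + 0.5   # 1.5748 = 2 * (1 - 0.2126)
    return y, cb, cr
```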
[0106] In the encoder 200, a picture is encoded by the encoder elements as described below. The picture to be encoded is partitioned (202) and processed in units of, for example, coding units (CUs). Each unit is encoded using, for example, either an intra or inter mode. When a unit is encoded in an intra mode, it performs intra prediction (260). In an inter mode, motion estimation (275) and compensation (270) are performed. The encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag. Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block.
[0107] The prediction residuals are then transformed (225) and quantized (230). The quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (245) to output a bitstream. The encoder can skip the transform and apply quantization directly to the non-transformed residual signal. The encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes.
[0108] The encoder decodes an encoded block to provide a reference for further predictions. The quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals. Combining (255) the decoded prediction residuals and the predicted block, an image block is reconstructed. In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts. The filtered image is stored at a reference picture buffer (280).
[0109] FIG. 3 is a diagram showing an example of a video decoder. In example decoder 300, a bitstream is decoded by the decoder elements as described below. Video decoder 300 generally performs
a decoding pass reciprocal to the encoding pass as described in FIG. 2. The encoder 200 also generally performs video decoding as part of encoding video data.
[0110] In particular, the input of the decoder includes a video bitstream, which may be generated by video encoder 200. The bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information. The picture partition information indicates how the picture is partitioned. The decoder may therefore divide (335) the picture according to the decoded picture partitioning information. The transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed. The predicted block may be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375). In-loop filters (365) are applied to the reconstructed image. The filtered image is stored at a reference picture buffer (380).
[0111] The decoded picture can further go through post-decoding processing (385), for example, an inverse color transform (e.g., conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201). The post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream. In an example, the decoded images (e.g., after application of the in-loop filters (365) and/or after post-decoding processing (385), if post-decoding processing is used) may be sent to a display device for rendering to a user.
[0112] FIG. 4 is a diagram showing an example of a system in which various aspects and examples described herein may be implemented. System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 400, singly or in combination, may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components. For example, in at least one example, the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components. In various examples, the system 400 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports. In various examples, the system 400 is configured to implement one or more of the aspects described in this document.
[0113] The system 400 includes at least one processor 410 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document. Processor 410 can include embedded memory, input output interface, and various other circuitries as known in the art. The
system 400 includes at least one memory 420 (e.g., a volatile memory device, and/or a non-volatile memory device). System 400 includes a storage device 440, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive. The storage device 440 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
[0114] System 400 includes an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 can include its own processor and memory. The encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
[0115] Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this document may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410. In accordance with various examples, one or more of processor 410, memory 420, storage device 440, and encoder/decoder module 430 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
[0116] In some examples, memory inside of the processor 410 and/or the encoder/decoder module 430 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding. In other examples, however, a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions. The external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory. In several examples, an external non-volatile flash memory is used to store the operating system of, for example, a television. In at least one example, a fast external dynamic volatile memory such as a RAM is used as working memory for video encoding and decoding operations.
[0117] The input to the elements of system 400 may be provided through various input devices as indicated in block 445. Such input devices include, but are not limited to, (i) a radio frequency (RF) portion
that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal. Other examples, not shown in FIG. 4, include composite video.
[0118] In various examples, the input devices of block 445 have associated respective input processing elements as known in the art. For example, the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data packets. The RF portion of various examples includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers. The RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband. In one set-top box example, the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band. Various examples rearrange the order of the above-described (and other) elements, remove some of these elements, and/or add other elements performing similar or different functions. Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter. In various examples, the RF portion includes an antenna.
[0119] The USB and/or HDMI terminals can include respective interface processors for connecting system 400 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 410 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 410 as necessary. The demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 410, and encoder/decoder 430 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
[0120] Various elements of system 400 may be provided within an integrated housing. Within the integrated housing, the various elements may be interconnected and transmit data therebetween using a suitable connection arrangement 425, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
[0121] The system 400 includes communication interface 450 that enables communication with other devices via communication channel 460. The communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460. The communication interface 450 can include, but is not limited to, a modem or network card, and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
[0122] Data is streamed, or otherwise provided, to the system 400, in various examples, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers). The Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450, which are adapted for Wi-Fi communications. The communications channel 460 of these examples is typically connected to an access point or router that provides access to external networks, including the Internet, for allowing streaming applications and other over-the-top communications. Other examples provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445. Still other examples provide streamed data to the system 400 using the RF connection of the input block 445. As indicated above, various examples provide data in a non-streaming manner. Additionally, various examples use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth® network.
[0123] The system 400 can provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495. The display 475 of various examples includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display. The display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device. The display 475 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop). The other peripheral devices 495 include, in various examples, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system. Various examples use one or more peripheral devices 495 that provide a function based on the output of the system 400. For example, a disk player performs the function of playing the output of the system 400.
[0124] In various examples, control signals are communicated between the system 400 and the display 475, speakers 485, or other peripheral devices 495 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention. The output devices may be communicatively coupled to system 400 via dedicated connections through respective interfaces 470, 480, and 490. Alternatively, the output devices may be connected to system 400 using the communications channel 460 via the communications interface 450.
The display 475 and speakers 485 may be integrated in a single unit with the other components of system 400 in an electronic device such as, for example, a television. In various examples, the display interface 470 includes a display driver, such as, for example, a timing controller (T Con) chip.
[0125] The display 475 and speakers 485 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box. In various examples in which the display 475 and speakers 485 are external components, the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
[0126] The examples may be carried out by computer software implemented by the processor 410 or by hardware, or by a combination of hardware and software. As a non-limiting example, the examples may be implemented by one or more integrated circuits. The memory 420 may be of any type appropriate to the technical environment and may be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples. The processor 410 may be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
[0127] Various implementations involve decoding. “Decoding”, as used in this application, can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display. In various examples, such processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding. In various examples, such processes also, or alternatively, may include processes performed by a decoder of various implementations described in this application, for example, determining that a selected motion vector predictor for a current block is non-valid; based on determining the selected motion vector predictor is non-valid, computing a dynamic parameter; determining a replacement motion vector predictor based on the computed dynamic parameter; decoding the current block based on the replacement motion vector predictor; etc.
[0128] As further examples, in one example “decoding” refers only to entropy decoding, in another example “decoding” refers only to differential decoding, and in another example “decoding” refers to a combination of entropy decoding and differential decoding. Whether the phrase “decoding process” is intended to refer specifically to a subset of operations or generally to the broader decoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.
[0129] Various implementations involve encoding. In an analogous way to the above discussion about “decoding”, “encoding” as used in this application can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream. In various examples, such processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding. In various examples, such processes also, or alternatively, may include processes performed by an encoder of various implementations described in this application, for example, computing a dynamic parameter; determining that a selected motion vector predictor is non-valid; based on determining that the selected motion vector predictor is non-valid, determining a replacement motion vector predictor based on the computed dynamic parameter; encoding a current block based on the replacement motion vector predictor; etc.
[0130] As further examples, in one example “encoding” refers only to entropy encoding, in another example “encoding” refers only to differential encoding, and in another example “encoding” refers to a combination of differential encoding and entropy encoding. Whether the phrase “encoding process” is intended to refer specifically to a subset of operations or generally to the broader encoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.
[0131] When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.
[0132] The implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
[0133] Reference to “one example” or “an example” or “one implementation” or “an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the example is included in at least one example. Thus, the appearances of the phrase “in one example” or “in an example” or “in one implementation” or “in an implementation”, as
well any other variations, appearing in various places throughout this application are not necessarily all referring to the same example.
[0134] Additionally, this application may refer to “determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Obtaining may include receiving, retrieving, constructing, generating, and/or determining.
[0135] Further, this application may refer to “accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0136] Additionally, this application may refer to “receiving” various pieces of information. Receiving is, as with “accessing”, intended to be a broad term. Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0137] It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
[0138] Also, as used herein, the word “signal” refers to, among other things, indicating something to a corresponding decoder. Encoder signals may include, for example, an encoding function on an input for a block using a precision factor, etc. In this way, in an example the same parameter is used at both the encoder side and the decoder side. Thus, for example, an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
Conversely, if the decoder already has the particular parameter as well as others, then signaling may be
used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various examples. It is to be appreciated that signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various examples. While the preceding relates to the verb form of the word “signal”, the word “signal” may (e.g., may also) be used herein as a noun.
[0139] As will be evident to one of ordinary skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information can include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry the bitstream of a described example. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on, or accessed or received from, a processor-readable medium.
[0140] Many examples are described herein. Features of examples may be provided alone or in any combination, across various claim categories and types. Further, examples may include one or more of the features, devices, or aspects described herein, alone or in any combination, across various claim categories and types. For example, features described herein may be implemented in a bitstream or signal that includes information generated as described herein. The information may allow a decoder to decode a bitstream, where the encoder, the bitstream, and/or the decoder operate according to any of the embodiments described. For example, features described herein may be implemented by creating and/or transmitting and/or receiving and/or decoding a bitstream or signal. For example, features described herein may be implemented in a method, process, apparatus, medium storing instructions, medium storing data, or signal. For example, features described herein may be implemented by a TV, set-top box, cell phone, tablet, or other electronic device that performs decoding. The TV, set-top box, cell phone, tablet, or other electronic device may display (e.g., using a monitor, screen, or other type of display) a resulting image (e.g., an image from residual reconstruction of the video bitstream). The TV, set-top box, cell phone, tablet, or other electronic device may receive a signal including an encoded image and perform decoding.
[0141] These examples may be performed by a device with at least one processor. The device may be an encoder or a decoder. These examples may be performed by a computer program product which is
stored on a non-transitory computer readable medium and includes program code instructions. These examples may be performed by a computer program comprising program code instructions.
[0142] Systems, methods, and instrumentalities are disclosed herein for adaptive thresholding for motion information coding. In examples, a video encoder or video decoder may determine that a selected motion vector predictor for a current block is non-valid. Based on determining the selected motion vector predictor is non-valid, a dynamic parameter may be computed. A replacement motion vector predictor may be determined based on the computed dynamic parameter. The current block may be encoded or decoded based on the replacement motion vector predictor.
[0143] In examples, a video encoder or video decoder may determine a distance between a current picture order count (POC) and a reference POC. The dynamic parameter may be computed based on the distance between the current POC and the reference POC. In examples, a video encoder or video decoder may determine a temporal layer ID of a current picture. The dynamic parameter may be computed based on the temporal layer ID of the current picture. In examples, a video encoder or video decoder may determine an absolute distance between a current POC and a minimum quantization parameter (QP) in the current POC. The dynamic parameter may be computed based on the absolute distance between the current POC and the minimum QP in the current POC. In examples, a video encoder or video decoder may determine a similarity between a first motion vector predictor and a second motion vector predictor. The dynamic parameter may be computed based on the similarity between the first motion vector predictor and the second motion vector predictor. In examples, the dynamic parameter may be computed based on any combination of the above examples (e.g., based on at least one of the distance between the current POC and the reference POC, the temporal layer ID of the current picture, the absolute distance between the current POC and the minimum quantization parameter (QP) in the current POC, or the similarity between the first motion vector predictor and the second motion vector predictor).
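As a purely illustrative companion to paragraphs [0142] and [0143], the following Python sketch shows one way the replacement flow could be organized. All names (compute_dynamic_parameter, replace_if_non_valid), the validity test, and the specific combination of signals are assumptions for illustration, not the disclosed method.

    # Sketch of the replacement flow of [0142]-[0143] (assumed, simplified).
    def compute_dynamic_parameter(poc_distance, temporal_id):
        # Assumed combination: the threshold grows with the POC distance
        # between the current picture and its reference, and with the
        # temporal layer ID of the current picture.
        return 4 * max(1, abs(poc_distance)) + temporal_id

    def replace_if_non_valid(selected_mvp, candidates, poc_distance, temporal_id):
        # "Non-valid" is modeled here as a missing predictor; the actual
        # validity criterion is defined elsewhere in the disclosure.
        if selected_mvp is not None:
            return selected_mvp
        threshold = compute_dynamic_parameter(poc_distance, temporal_id)
        # Return the first candidate whose components stay within the
        # dynamic threshold; otherwise fall back to a zero motion vector.
        for mv in candidates:
            if mv is not None and max(abs(mv[0]), abs(mv[1])) <= threshold:
                return mv
        return (0, 0)

    print(replace_if_non_valid(None, [(40, 2), (3, 1)], poc_distance=2, temporal_id=1))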
[0144] Inter prediction information may be represented in compressed video, for example, using different types of video compression schemes. A block-based video codec may associate motion information with a (e.g., each) block coded in inter mode. A block structure may be used in video coding schemes to represent a compressed picture. Motion representations may be assigned to an inter block.
[0145] A video compression system may divide a picture into Coding Tree Units (CTUs). A size of a CTU may be, for example, 64x64, 128x128, or 256x256 pixels. A (e.g., each) CTU may be represented by a Coding Tree in the compressed domain. For example, there may be a quad-tree division (QT) of the CTU, where each leaf may be referred to as a Coding Unit (CU), as shown by example in FIG. 5 and FIG. 6.
[0146] FIG. 5 illustrates an example of Coding Tree Unit, Coding Unit, and Prediction Unit structures to represent a compressed picture (e.g., a picture compressed using a video coding scheme).
[0147] A (e.g., each) CU may be given Intra or Inter prediction parameters (e.g., Prediction Info). A CU may be spatially partitioned into one or more Prediction Units (PUs). A (e.g., each) PU may be assigned prediction information. An Intra or Inter coding mode may be assigned, for example, on the CU level.
[0148] FIG. 6 illustrates an example of Coding Tree Units, Prediction Units, and Transform Units in video coding.
[0149] Examples of partition types existing in video coding are illustrated in FIG. 7. As shown in FIG. 7, partition types may include square partitions (e.g., 2Nx2N and NxN), which may be (e.g., the only partition types) used in both intra and inter CUs; symmetric non-square partitions (e.g., 2NxN and Nx2N), which may be used (e.g., only) in inter CUs; and/or asymmetric partitions, which may be used (e.g., only) in inter CUs.
[0150] FIG. 7 illustrates an example partitioning of coding units into prediction units.
[0151] A block structure may be used in a video coding scheme. A block structure may be used to represent compressed pictures. A picture may be divided in square CTUs (e.g., in various video coding schemes). For example, a CTU may be of size 32x32, 64x64, or 128x128. The CTU division of a picture may (e.g., thus) form a regular grid, where upper and left bounds may spatially coincide with the top and left border of the picture.
[0152] A (e.g., each) CTU may be split into coding units according to a coding tree, as illustrated by the example in FIG. 8. The coding tree may be made of multiple (e.g., two) stages. A CTU may (e.g., first) be partitioned by a quaternary tree (e.g., or quad-tree/QT). For example, a quad-tree split may divide a coding tree node corresponding to a square picture block into four (4) nodes corresponding to four (4) sub-blocks of equal sizes, as shown by solid lines of FIG. 8.
[0153] The quad-tree leaves may (e.g., then) be (e.g., further) partitioned by a multi-type tree (MTT), which may involve four (4) split types (e.g., or split modes), as illustrated by the example in FIG. 9. The split types may be vertical and horizontal binary split (BT) modes, e.g., SPLIT_BT_VER and SPLIT_BT_HOR, and vertical and horizontal ternary split (TT) modes, e.g., SPLIT_TT_VER and SPLIT_TT_HOR. A binary split may divide a block into two sub-blocks, each half the size of the parent block, according to the split orientation. A ternary split may divide a block into three (3) sub-blocks, whose sizes may be, respectively, equal to 1/4, 1/2, and 1/4 of the parent block in the considered split orientation.
[0154] An example of a CTU division is illustrated by FIG. 8. FIG. 8 illustrates an example of a CTU division of a coding tree, e.g., according to a video coding scheme.
[0155] FIG. 9 illustrates an example of split modes supported in multi-type tree partitioning.
[0156] The leaves of the coding tree of a CTU may be the coding unit(s), for example, in the case of a joint coding shared by luma and chroma components.
[0157] Separate coding trees may be used in intra picture. Separated coding trees may be used for a luma component on one side and chroma components on the other side. The luma component part of a CTU may be referred to as a luma coding tree block. A luma coding tree block (CTB) may (e.g., then) be associated with a coding tree. The coding tree leaves may be associated with luma coding blocks. Intra picture video coding may use separated luma/chroma coding trees and a three (3) component picture, where the two chroma CTBs may share the same coding tree.
[0158] In some types of video coding schemes, unit sizes - CU, PU, and TU - may be of equal size. Coding units may generally not be partitioned into PU or TU, for example, except in one or more (e.g., specific) coding modes.
[0159] In video coding schemes, inter prediction information may be represented using motion vectors. For example, a coding unit coded in inter mode may employ one or more (e.g., several) motion vectors respectively assigned to each PU in the CU, e.g., as explained herein. The coding of the motion information may be performed, for example, according to an Adaptive Motion Vector Prediction (AMVP) mode and/or a merge mode. An example of coding and decoding of inter prediction information is shown in FIG. 10. As shown in FIG. 10, there may be, for example, three (3) (e.g., main) modes for coding inter prediction parameters: skip merge mode, non-skip merge mode, and AMVP mode. The skip and merge modes may be signaled, for example, through two (2) dedicated flags. AMVP mode may be on, for example, if/when the two flags are false.
[0160] FIG. 10 illustrates an example of signaling of inter prediction information (e.g., according to a video coding scheme).
[0161] Table 1 provides an example summary of differences between AMVP, merge, and skip modes of inter coded CU.
Table 1 - Example of differences between AMVP, merge, and skip modes of inter coded CU
[0162] The merge index may enable the derivation of the prediction type (e.g., P or B picture), the reference picture list index, and/or the associated motion vectors.
[0163] As shown by FIG. 10 (e.g., in AMVP mode of a video coding scheme), reference pictures (e.g., up to two reference pictures) used to temporally predict a considered PU may be (e.g., explicitly) signaled, for example, with the motion vectors associated with each PU and each reference picture.
[0164] The motion vectors may be predictively coded. A motion vector predictor (MVP) may be chosen by the encoder for a (e.g., each) reference picture and a motion vector difference (MVD) relative to a (e.g., each) selected MVP is signaled. The decoder side reconstructed motion data may include the sum of the MVPs used for a given PU and their associated MVDs.
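Since [0164] reduces to a simple sum on the decoder side, a one-line sketch may help; the tuple representation of motion vectors is an assumption made for illustration.

    # Decoder-side AMVP reconstruction: decoded MV = signaled MVP + signaled MVD.
    def reconstruct_mv(mvp, mvd):
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])

    assert reconstruct_mv((4, -2), (1, 3)) == (5, 1)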
[0165] The MVPs may be chosen in an AMVP candidate list, which may include (e.g., 2) elements for a (e.g., each) reference picture. The index of the chosen MVP may be signaled in the bit-stream. The MVP candidate list may be constructed, for example, according to the example workflow shown in FIG. 11.
[0166] A candidate MVP from left neighboring positions A0, A1, as shown by example in FIG. 12, may be derived, for example, if an inter coded block exists at the corresponding spatial location. An MVP from the top neighboring block may be derived, and (e.g., then) a temporal MVP may be derived, from a reference picture at spatial position H (e.g., if available), or position C (e.g., otherwise/if not available). A derived MVP may be scaled, for example, according to the temporal distances between the reference picture associated with the MVP and the current reference picture being considered.
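The scaling mentioned at the end of [0166] is commonly a linear stretch by the ratio of POC distances; the sketch below assumes that model and uses floating-point rounding, whereas a real codec would use a fixed-point approximation.

    # Scale a derived MVP by the ratio of the current POC distance to the
    # candidate's POC distance (assumed linear model).
    def scale_mvp(mv, cur_poc, cur_ref_poc, cand_poc, cand_ref_poc):
        num = cur_poc - cur_ref_poc
        den = cand_poc - cand_ref_poc
        if den == 0:
            return mv
        return (round(mv[0] * num / den), round(mv[1] * num / den))

    print(scale_mvp((8, -4), 10, 8, 10, 6))  # half the distance -> (4, -2)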
[0167] A redundancy check may be conducted between derived spatial MVPs. For example, duplicate derived MVPs may be discarded.
[0168] The final AMVP candidate list may include the (e.g., first two) derived MVP candidates. The AMVP candidate list may be completed with zero motion vectors, for example, if fewer than two (2) MVP candidates are obtained. FIG. 11 illustrates an example of MVP candidate list construction in AMVP mode of a video coding mechanism. FIG. 12 illustrates an example of neighboring spatial locations A0, A1 (left), B0, B1, B2 (above), and collocated blocks for TMVP (H and C) of a current block.
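The list assembly in [0168] can be summarized in a few lines; the sketch below assumes motion vectors are plain tuples and that the redundancy check of [0167] is a simple equality test.

    # Keep distinct derived MVPs, then pad with zero vectors to two entries.
    def build_amvp_list(derived, size=2):
        out = []
        for mv in derived:
            if mv not in out:            # redundancy check ([0167])
                out.append(mv)
            if len(out) == size:
                return out
        while len(out) < size:           # completion with zero MVs ([0168])
            out.append((0, 0))
        return out

    print(build_amvp_list([(3, 1), (3, 1), (5, -2)]))  # -> [(3, 1), (5, -2)]
    print(build_amvp_list([(3, 1), (3, 1)]))           # -> [(3, 1), (0, 0)]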
[0169] A merge mode may be implemented in video coding schemes. As shown by example in FIG. 10, motion information coding/decoding according to the merge mode may take place in two modes, e.g., the skip mode and the merge mode. The decoder may retrieve the motion information of a PU, for example, based on a (e.g., one single) field (e.g., the merge index) that may be signaled (e.g., in the two modes). The merge index may indicate which Motion Vector Predictor (MVP) in the list of merge motion information predictors may be used to derive the motion information of a current PU. The list of motion information predictors may be referred to as the merge list or the merge candidate list. A candidate motion information predictor may be referred to as a merge candidate.
[0170] Merge mode may be implemented in a video coding scheme. The merge mode may include derivation of the inter prediction information (e.g., also called motion information) of a given prediction unit
from a selected motion information predictor candidate. The motion information considered may include (e.g., all) the inter prediction parameters of a PU, which may include, for example, one or more of the following: the uni-directional or bi-directional temporal prediction type; the reference picture index within a (e.g., each) reference picture list; and/or the motion vector(s).
[0171] A merge candidate list may be (e.g., systematically) constructed with multiple (e.g., 5) merge candidates. Examples describe how the merge list may be constructed, e.g., on the encoder and on the decoder sides. One or more spatial positions (e.g., up to 5 spatial positions) may be considered to retrieve potential candidates. Spatial positions may be visited according to an order, such as, for example, the following order: (1) Left (A1); (2) Above (B1); (3) Above right (B0); (4) Left bottom (A0); and (5) Above left (B2).
[0172] The symbols A0, A1, B0, B1, and B2 may denote the spatial positions shown by example in FIG. 13. Spatial candidates with associated motion information that differs from each other may be selected. A temporal predictor (e.g., TMVP) may be selected, for example, by considering the temporal motion information located at position H. The “center” position C may be used instead, for example, if the candidate at position H in the considered reference picture is not available. A pruning process may take place (e.g., as shown by example in FIG. 14), for example, to eliminate redundant candidates from the selected set of spatial and temporal candidates.
[0173] As shown in FIG. 14, candidates of another type (e.g., combined candidate type) may be pushed to the merge list (e.g., if the merge list is not full), for example, in the case of a B slice. A combined candidate may be formed, for example, by combining the motion information associated with one reference picture list (L0) from a candidate already present in the merge list with the motion information associated with the other reference picture list (L1) from another candidate already present in the merge list.
[0174] Zero motion vectors may be pushed to the back of the merge list until it is full, for example, if the merge list is still not full (e.g., with five (5) elements).
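Paragraphs [0172] to [0174] together describe an ordered fill; the sketch below assumes a simplified candidate representation and an equality-based pruning test.

    # Merge-list fill order: pruned spatial/temporal candidates, then
    # combined candidates (B slices), then zero MVs up to five entries.
    def build_merge_list(spatial, temporal, combined, size=5):
        out = []
        for mv in spatial + temporal + combined:
            if mv is not None and mv not in out:   # pruning of duplicates
                out.append(mv)
            if len(out) == size:
                return out
        out += [(0, 0)] * (size - len(out))        # zero-MV padding ([0174])
        return out

    print(build_merge_list([(1, 0), (1, 0), (2, 2)], [(0, 3)], []))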
[0175] FIG. 13 illustrates an example of positions of spatial and temporal motion vector predictors used in a merge mode of a video coding scheme. As shown in FIG. 13, spatial merge candidates are shown on the left and temporal merge candidates are shown on the right. FIG. 14 illustrates an example of the construction of a list of merge motion vector predictor candidates of the video coding scheme.
[0176] An example of a process (e.g., an overall process) of merge list construction (e.g., in a video coding scheme) is shown in FIG. 15. FIG. 15 illustrates an example of the construction of a list of merge motion vector predictor candidates (e.g., in the video coding scheme).
[0177] Inter prediction information may be represented and coded in video coding, such as in a video coding scheme. For example, motion data representation may be richer in one video coding scheme than in another. Motion data representation may be divided into categories (e.g., two main categories), such as whole-block-based motion representation and sub-block-based motion representation, as illustrated by example in FIG. 16. One or more (e.g., two) modes for coding motion information may be used in a (e.g., each) category. Modes for coding motion information may include, for example, merge/skip and AMVP.
[0178] FIG. 16 illustrates an example of whole-block and sub-block-based motion representation categories.
[0179] Whole-block-based motion representation may include assignment of a (e.g., one) set of motion information to an inter block. A set of motion information may be made of one or two motion vectors and associated reference picture(s). Motion information of a block may be represented under the form of a (e.g., single) motion vector for the whole block.
[0180] Sub-block-based motion coding mode may divide a block into subblocks (e.g., 4x4 or 8x8 luma sample subblocks). An individual set of motion information may be assigned to a (e.g., each) subblock.
[0181] Whole-block-based motion representation and coding may be implemented in video coding schemes.
[0182] Whole-block-based AMVP mode may be implemented in certain video coding schemes. An AMVP mode may (e.g., explicitly) signal the motion vector as an MV difference relative to a selected MVP for a given reference picture, for example, together with the reference picture index that identifies the reference picture associated with the coded motion vector.
[0183] An AMVP motion vector predictor (MVP) candidate list may include multiple (e.g., two) elements. One or more of the following elements may be employed to construct an AMVP MVP candidate list on the encoder and decoder sides: multiple (e.g., up to four (4)) spatial candidates; (e.g., up to one (1)) temporal MVP candidate; (e.g., up to four (4)) History-Based Motion Vector Prediction (HMVP) candidates; and/or zero motion vector(s), for example, if needed to get two (2) MVP candidates in the final list.
[0184] HMVP may use previously coded MVs as MVPs associated with adjacent or non-adjacent blocks relative to a current block. A table of HMVP candidates may be maintained at the encoder and decoder sides. The HMVP table may be updated on the fly, for example, as a first-in-first-out (FIFO) buffer of MVPs. In some examples, there may be up to five candidates in the HMVP table. The table may be updated by appending associated motion information to the end of the table as a new HMVP candidate, for example, after coding a (e.g., one) inter predicted block that is not in sub-block mode (e.g., including affine mode) or geometric partition mode (GPM). A FIFO rule may be applied to manage the table. A redundant candidate
in an HMVP table may be removed, for example, instead of the first candidate. The table may be reset, for example, at a (e.g., each) CTU row, e.g., to enable parallel processing.
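The FIFO-with-redundancy-removal behavior of [0184] maps naturally onto a small helper; the five-entry bound follows the text, while the rest of the representation is assumed.

    # HMVP table update: remove a redundant entry in place, otherwise evict
    # the oldest entry when full, then append the new motion information.
    def update_hmvp(table, new_mv, max_size=5):
        if new_mv in table:
            table.remove(new_mv)      # redundant candidate removed ([0184])
        elif len(table) == max_size:
            table.pop(0)              # FIFO eviction of the oldest entry
        table.append(new_mv)
        return table

    print(update_hmvp([(1, 0), (2, 0), (3, 0)], (2, 0)))
    # -> [(1, 0), (3, 0), (2, 0)]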
[0185] A whole-block-based AMVP motion coding mode may employ, for example, one or more of the following (e.g., in addition to an AMVP candidate list): Symmetric Motion Vector Difference (SMVD); Adaptive Motion Vector Resolution (AMVR); and/or Bi-prediction with coding unit weights (BCW).
[0186] SMVD may include setting the MVD associated with reference picture list 1 (L1) equal to the opposite of the MVD associated with reference picture list 0 (L0) for a given block. Reference pictures used in SMVD mode may be derived by the decoder, for example, with pre-defined rules. SMVD may enable reduction of the rate cost for coding MVD information. SMVD may be selected at block level.
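SMVD as described in [0186] only constrains the two signaled differences; a minimal sketch, assuming tuple-valued MVDs:

    # SMVD: the list-1 MVD is the opposite of the coded list-0 MVD.
    def smvd_mvds(mvd_l0):
        return mvd_l0, (-mvd_l0[0], -mvd_l0[1])

    print(smvd_mvds((6, -2)))  # -> ((6, -2), (-6, 2))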
[0187] AMVR may allow/enable signaling the MVD with quarter-pel, half-pel, integer-pel, or 4-pel luma sample resolutions, which may allow/enable saving bits in the coding of MVD information. The motion vector resolution (e.g., in AMVR) may be chosen at block level.
[0188] BCW may enable bi-prediction of a block with unequal weights. BCW may be signaled at the block (e.g., CU) level.
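For BCW ([0188]), a common formulation blends the two predictions with weights summing to eight; the weight range and rounding below follow that formulation and are assumptions rather than something mandated by the text.

    # Weighted bi-prediction of one sample; w is the list-1 weight in 1/8 units.
    def bcw_blend(p0, p1, w):
        return ((8 - w) * p0 + w * p1 + 4) >> 3

    print(bcw_blend(100, 120, 4))   # equal weights -> 110
    print(bcw_blend(100, 120, 10))  # stronger list-1 contribution -> 125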
[0189] An internal motion vector representation (e.g., in a video codec) may be achieved at 1/16-luma sample accuracy, for example, instead of 1/4-luma sample accuracy (e.g., in a video coding mechanism).
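Combining [0187] and [0189], an MVD coded at an AMVR resolution can be converted to a 1/16-sample internal representation with a shift; the shift values follow directly from those resolutions, though the exact syntax mapping is an assumption.

    # Convert a coded MVD at the selected AMVR resolution to 1/16-sample units.
    SHIFT_TO_1_16 = {"quarter": 2, "half": 3, "integer": 4, "four": 6}

    def mvd_to_internal(mvd, resolution):
        s = SHIFT_TO_1_16[resolution]
        return (mvd[0] << s, mvd[1] << s)

    print(mvd_to_internal((3, -1), "integer"))  # -> (48, -16)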
[0190] An example of a whole-block-based (e.g., or non-subblock-based) merge list construction process (e.g., for a video coding scheme) is illustrated by FIGs. 17A and 17B. Whole-block-based merge mode (e.g., in a video coding mechanism) may also be called regular merge mode. The merge MVP candidate list construction for whole-block-based merge mode may differ, for example, among various video coding mechanisms. Whole-block-based merge mode may have multiple merge coding modes, which may include, for example, one or more of the following: Merge Mode with MV Difference (MMVD); Geometric Partitioning Mode (GPM); and/or Combined Intra/Inter Prediction (CIIP).
[0191] A merge MVP candidate list may be constructed with one or more of the following types of MVP candidates: spatial candidates; temporal MVP candidates; HMVP candidates; pairwise average candidates; and/or zero MV candidates.
[0192] Spatial candidates in one video coding scheme may be similar to those of another video coding mechanism, e.g., except that the first two candidates may be swapped. Temporal MVP candidates in one video coding scheme may be similar to those of another video coding scheme. HMVP candidates may be inserted into the merge list, for example, so that the merge list reaches the maximum allowed number of MVP candidates minus one (1). Pairwise average candidates (e.g., up to one pairwise average candidate) may be added to the merge candidate list. Pairwise candidates may be computed, for example, as follows. The first two MVP candidates present in the list may be considered, and their motion vectors may be averaged. The averaging may be computed separately for each reference picture list. Motion vectors related to both lists L0 and L1 may be averaged, for example, if both MVPs are bidirectional. If only one motion vector is present in a reference picture list, that motion vector may be taken as is to form the pairwise candidate.
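The pairwise averaging rule of [0192] distinguishes per reference picture list; the sketch below assumes candidates are dicts mapping list ids to vectors, and uses truncating division where a real codec would define its own rounding.

    # Average the first two merge candidates per reference picture list.
    def pairwise_average(c0, c1):
        out = {}
        for lst in ("L0", "L1"):
            a, b = c0.get(lst), c1.get(lst)
            if a is not None and b is not None:
                out[lst] = ((a[0] + b[0]) // 2, (a[1] + b[1]) // 2)
            else:
                out[lst] = a if a is not None else b   # take the lone MV as is
        return out

    print(pairwise_average({"L0": (4, 2), "L1": None},
                           {"L0": (8, 6), "L1": (10, 0)}))
    # -> {'L0': (6, 4), 'L1': (10, 0)}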
[0193] FIG. 18 illustrates an example of allowed motion vector differences (MVDs). MMVD merge mode may allow/enable coding of a limited motion vector difference (MVD) on top of selected merge MVP candidates, for example, to represent the motion information of a CU. MMVD coding may be limited to four (4) vector directions and eight (8) magnitude values, e.g., from 1/4 luma sample to 32 luma samples. MMVD may provide an intermediate accuracy level, which may yield an intermediate trade-off between rate cost and MV accuracy to signal the motion information.
[0194] FIG. 19 illustrates an example representation of CU motion data in GPM mode. GPM merge mode (e.g., in a video coding scheme) may support inter prediction. In an example, an inter CU may be partitioned into (e.g., two) motion partitions, for example, along a straight line, as shown by example in FIG. 19. The partition may be non-rectangular. An asymmetric splitting may be performed, for example, if rectangular, which may avoid redundancy with CU-level binary splitting.
[0195] FIG. 20 illustrates examples of GPM splits grouped by identical angles. GPM usage may be signaled, for example, with a CU-level flag as a particular merge mode. The split line orientation and/or position relative to the CU center may be signaled (e.g., at CU level) through a (e.g., dedicated) GPM index. In some examples, a total of 64 partitions may be supported by geometric partitioning mode for a (e.g., each possible) CU size w × h = 2^m × 2^n with m, n ∈ {3, …, 6}, e.g., excluding 8x64 and 64x8. FIG. 20 illustrates various examples of split lines that may be achieved, e.g., at different position offsets from the CU center, and for various split line angles.
[0196] A (e.g., each) part of a geometric partition in the CU may be inter-predicted using its own motion. Uni-prediction (e.g., only uni-prediction) may be allowed for a (e.g., each) partition. A (e.g., each) part may have one motion vector and one reference picture index. The uni-prediction motion constraint may be applied, for example, to limit the number of motion compensated predictions to two for each CU, which may be similar to conventional bi-prediction.
[0197] The motion vector of a (e.g., each) partition may be derived from (e.g., up to two) merge indices, respectively, for a (e.g., each) partition, which may be similar to a regular whole-block-based merge mode.
[0198] FIG. 21 illustrates an example of blending between two predicted partitions performed in GPM. Sample values along the geometric partition edge may be adjusted using a blending process with adaptive weights (e.g., as illustrated by example in FIG. 21), for example, after predicting each part of the geometric partition. The process may form the prediction signal for the whole CU. A transform and
quantization process may be applied to the whole CU (e.g., not for each partition), for example, as in other prediction modes.
[0199] CIIP merge mode may include combining an inter prediction signal with an intra prediction signal to predict a current CU. The inter prediction signal in the CIIP mode may be derived, for example, using the same inter prediction process as applied to regular merge mode. The intra prediction signal may be derived following the regular intra prediction process with the planar mode. The intra and inter prediction signals may be combined, for example, using weighted averaging. The weight value may be calculated, for example, depending on the coding modes of the top and left neighboring blocks. The weight value may be calculated, for example, in accordance with the following equation.
P_CIIP = (w_merge × P_merge + w_intra × P_intra + 2) >> 2
[0200] The sum of the weights w_merge and w_intra may be equal to a constant, e.g., four (4). The weights w_merge and w_intra may be constant over the whole CU.
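The CIIP combination in [0199] and [0200] is a two-weight blend; in the sketch below the weight pair is a placeholder, since the actual pair depends on the coding modes of the neighboring blocks.

    # CIIP sample blend with weights summing to four ([0200]).
    def ciip_blend(p_merge, p_intra, w_merge=2, w_intra=2):
        assert w_merge + w_intra == 4
        return (w_merge * p_merge + w_intra * p_intra + 2) >> 2

    print(ciip_blend(100, 140))  # -> 120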
[0201] Sub-block-based motion may be represented and coded, for example, in a video coding scheme. FIG. 22 illustrates an example of control point based affine motion models (e.g., supported by a video coding scheme). On the left is shown a 4-parameter affine model. On the right is shown a 6-parameter affine model.
[0202] Affine motion compensation may be performed. In a video coding scheme, a translation motion model (e.g., only a translation motion model) may be applied for motion compensated temporal prediction (MCP). A translational motion model may not capture some types of motion, such as zoom in, zoom out, rotation, perspective motions, and/or irregular motions. In a video coding scheme, a sub-block-based affine motion compensation prediction may be used at a CU level. As shown by the example in FIG. 22, the affine motion field of the block may be described by motion information of (e.g., two) control point motion vectors (e.g., 4-parameter affine motion model) or (e.g., three) control point motion vectors (e.g., 6-parameter affine motion model). As shown in FIG. 22, the vectors v_0, v_1, and v_2 may be the control point motion vectors (CPMVs) associated with the block. The vectors may be used to represent the affine motion field of the considered block.
[0203] A 4-parameter affine motion model may derive a motion vector at sample location (x, y) in a block, for example, in accordance with the 4-parameter affine motion field computation shown in the following equation (Eq. (2)):
mv_x = ((mv1x - mv0x) / W) × x - ((mv1y - mv0y) / W) × y + mv0x
mv_y = ((mv1y - mv0y) / W) × x + ((mv1x - mv0x) / W) × y + mv0y
[0204] A 6-parameter affine motion model may derive a motion vector at sample location (x, y) in a block, for example, in accordance with the 6-parameter affine motion field computation as shown in the following equation (Eq. (3)):
mv_x = ((mv1x - mv0x) / W) × x + ((mv2x - mv0x) / H) × y + mv0x
mv_y = ((mv1y - mv0y) / W) × x + ((mv2y - mv0y) / H) × y + mv0y
where (mv0x, mv0y) may be the motion vector of the top-left corner control point, (mv1x, mv1y) may be the motion vector of the top-right corner control point, (mv2x, mv2y) may be the motion vector of the bottom-left corner control point, and W and H may be the width and height of the block.
[0205] FIG. 23 illustrates an example of affine motion field representation on a 4x4 subblock basis. Affine motion compensation may be performed, for example, on a 4x4 subblock basis. A motion vector of a (e.g., each) 4x4 luma subblock may be derived, for example, by calculating the motion vector of the center sample of each subblock according to Eq. (2) or Eq. (3) (e.g., as shown by example in FIG. 23). The calculated motion vector may be rounded, for example, to 1/16 fraction accuracy. The motion compensation interpolation filters may (e.g., then) be applied to generate the prediction of a (e.g., each) subblock with the derived motion vector. The subblock size in chroma components may (e.g., also) be, for example, 4x4. The MV of a 4x4 chroma subblock may be calculated as the average of the MVs of the top-left and bottom-right luma subblocks in the collocated 8x8 luma region.
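Evaluating Eq. (2) at the center of each 4x4 subblock, as described in [0205], can be sketched as follows; the vectors are kept as floats here instead of being rounded to the 1/16-sample grid used by a real codec.

    # 4-parameter affine MV at sample position (x, y) for a block of width w.
    def affine_mv_4param(mv0, mv1, w, x, y):
        ax = (mv1[0] - mv0[0]) / w
        ay = (mv1[1] - mv0[1]) / w
        return (ax * x - ay * y + mv0[0], ay * x + ax * y + mv0[1])

    # One MV per 4x4 luma subblock, taken at the subblock center.
    def subblock_field(mv0, mv1, w, h, sb=4):
        return {(x, y): affine_mv_4param(mv0, mv1, w, x + sb / 2, y + sb / 2)
                for x in range(0, w, sb) for y in range(0, h, sb)}

    print(subblock_field((0, 0), (8, 4), w=16, h=8)[(4, 4)])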
[0206] As with translational motion inter prediction, one or more (e.g., two main) affine inter prediction modes may be implemented, such as affine AMVP mode and/or affine merge mode.
[0207] Affine merge mode may be a subblock-based motion coding mode inside the sub-block merge mode. Affine merge mode may be applied for CUs, for example, based on width and/or height (e.g., if both width and height are larger than or equal to eight (8)). The CPMVs of the current CU may be generated in affine merge mode based on the motion information of the spatial neighboring CUs. There may be one or more (e.g., up to five) Control Point Motion Vector Predictor (CPMVP) candidates. An index may be signaled to indicate the candidate to be used for the current CU. One or more of the following three types of CPMV candidate may be used to form an affine merge candidate list: inherited affine merge candidates extrapolated from the CPMVs of the neighbor CUs; constructed affine merge candidate CPMVPs that may be derived using the translational MVs of the neighbor CUs; and/or zero MVs.
[0208] There may be a maximum number of (e.g., two) inherited affine candidates, which may be derived from the affine motion model of the neighboring blocks, e.g., one from left neighboring CUs and one from above neighboring CUs.
[0209] Candidate blocks are shown by example in FIG. 24. FIG. 24 illustrates an example of locations of inherited affine motion predictors. The scan order may be A0->A1, for example, for the left predictor. The scan order may be B0->B1->B2, for example, for the above predictor. The first inherited candidate from a (e.g., each) side (e.g., only the first inherited candidate from each side) may be selected. Control point motion vectors for a neighboring affine CU may be used to derive the CPMVP candidate in the affine merge list of the current CU, for example, if/when the neighboring affine CU is identified.
[0210] FIG. 25 illustrates an example of control point motion vector inheritance. As shown in FIG. 25, the motion vectors v2, v3, and v4 of the top-left corner, above-right corner, and left-bottom corner of the CU that includes block A may be attained, for example, if the neighboring left-bottom block A is coded in affine mode. The two CPMVs of the current CU may be calculated according to v2 and v3, for example, if/when block A is coded with a 4-parameter affine model. The three CPMVs of the current CU may be calculated according to v2, v3, and v4, for example, if block A is coded with a 6-parameter affine model.
[0211] FIG. 26 illustrates an example of locations of candidates' positions for constructed affine merge mode. A constructed affine candidate may be a candidate constructed by combining the neighbor translational motion information of each control point. The motion information for the control points may be derived from (e.g., specified) spatial neighbors and a temporal neighbor, e.g., as shown by example in FIG. 26. CPMVk (k = 1, 2, 3, 4) may represent the k-th control point. For CPMV1, the B2->B3->A2 blocks may be checked. The MV of the first available block may be used. For CPMV2, the B1->B0 blocks may be checked. For CPMV3, the A1->A0 blocks may be checked. TMVP may be used as CPMV4, for example, if it is available.
[0212] Affine merge candidates may be constructed based on the motion information, for example, after MVs of four control points are obtained. One or more of the following combinations of control point MVs may be used to generate a constructed affine merge candidate (e.g., in the following order): {CPMV1, CPMV2, CPMV3}, {CPMV1, CPMV2, CPMV4}, {CPMV1, CPMV3, CPMV4}, {CPMV2, CPMV3, CPMV4}, {CPMV1, CPMV2}, {CPMV1, CPMV3}.
[0213] For example, control point motion vectors {CPMV1, CPMV2, CPMV3} may be used to generate an affine motion field for the CU (e.g., following Eq. (3)).
[0214] The combination of three (3) CPMVs may construct a 6-parameter affine merge candidate. The combination of two (2) CPMVs may construct a 4-parameter affine merge candidate. The related combination of control point MVs may be discarded, for example, if the reference indices of the control points are different, which may avoid a motion scaling process in case the CPMVs point to different reference pictures.
[0215] Zero MVs may be inserted at the end of the list, for example, if the list is still not full after inherited affine merge candidates and constructed affine merge candidates are considered for being appended to the affine merge candidate list.
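The list-construction order of paragraphs [0208]-[0215] can be summarized with the short, hedged Python sketch below; candidate derivation itself is abstracted away, and the dictionary fields ('cpmvs', 'same_ref') are illustrative assumptions rather than normative names.

```python
def build_affine_merge_list(inherited, constructed, max_size=5):
    """inherited/constructed: candidate dicts, e.g. {'cpmvs': [...], 'same_ref': bool}."""
    cand = list(inherited[:2])                     # up to two inherited candidates
    for c in constructed:
        if len(cand) >= max_size:
            break
        if c.get('same_ref', False):               # discard combinations whose control
            cand.append(c)                         # points use different reference pictures
    while len(cand) < max_size:                    # pad the tail with zero-MV candidates
        cand.append({'cpmvs': [(0, 0)] * 3, 'same_ref': True})
    return cand
```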
[0216] Affine AMVP mode may be applied for CUs based on width and/or height (e.g., with both width and height larger than or equal to 16). An affine flag (e.g., at CU level) may be signaled in the bitstream to indicate the use of affine AMVP mode. Another flag may signal whether a 4-parameter or a 6-parameter affine model is used. The difference of the CPMVs of a current CU and their predictor CPMVPs may be coded, e.g., in affine AMVP mode.
[0217] The CPMVPs used to predict the CPMVs of a CU may be taken from an affine AMVP candidate list, which may be made of two (2) elements. The affine AMVP candidate list may be constructed, for example, using one or more of the following types of CPMV candidate (e.g., in the following order): inherited affine AMVP candidates extrapolated from the CPMVs of the neighbor CUs; constructed affine AMVP candidate CPMVPs that may be derived using the translational MVs of the neighbor CUs; translational MVs from neighboring CUs; and/or zero MVs.
[0218] The term checking may be used as follows. Checking a potential candidate may include checking that a valid affine AMVP or affine merge candidate to predict the current CU's affine CPMVs is available and/or is valid. An available and valid candidate may be added to the candidate list under construction.
[0219] The checking order of inherited affine AMVP candidates may be the same as or similar to the checking order of inherited affine merge candidates. A difference (e.g., the only difference) may be that (e.g., only) an affine CU that has the same reference picture as the current block may be considered for an AMVP candidate. A pruning process may not be applied, for example, if/when inserting an inherited affine motion predictor into the candidate list.
[0220] A constructed affine AMVP candidate may be derived from (e.g., specified) spatial neighbors, e.g., as shown by example in FIG. 26. The checking order used may, for example, be the same as the checking order in affine merge candidate construction. The reference picture index of the neighboring block may (e.g., also) be checked. The block that may be used may be the first block in the checking order that is inter coded and has the same reference picture as the current CU.
[0221] MVs mv0 and mv1 may be added as one candidate in the affine AMVP list, for example, if/when the current CU is coded with the 4-parameter affine mode and mv0 and mv1 are both available. Three CPMVs may be added as one candidate in the affine AMVP list, for example, if/when the current CU is coded with the 6-parameter affine mode and the three CPMVs are available. Otherwise, a constructed AMVP candidate may be set as unavailable.
[0222] MVs mv0, mv1, and mv2 may be added, e.g., in order, as translational MVs to predict (e.g., all) control point MVs of the current CU, e.g., if/when available, for example, if the affine AMVP candidate list still has fewer than two (2) entries after valid inherited affine AMVP candidates and constructed AMVP candidates are inserted. Zero MVs may (e.g., then) be used to fill the affine AMVP list if it is still not full.
[0223] A sub-block merge/skip mode may be a merge mode using a merge candidate list of (e.g., at most five (5)) elements with (e.g., only) subblock-based motion candidates. A merge index (e.g., as for regular merge) may indicate the subblock-based merge candidate used to derive the motion data of a CU. The subblock-based merge candidate list may be made of, for example, the following elements. A Subblock-based Temporal Motion Vector Prediction (SbTMVP) candidate may be put in first place. Affine merge candidates may (e.g., then) be put in the list. The subblock merge list may be constructed, for example, with one or more of the following candidates: SbTMVP; inherited affine merge candidates; constructed affine merge candidates whose CPMVPs may be derived using the translational MVs of the neighbor CUs; and/or zero MVs.
[0224] Subblock-based Temporal Motion Vector Prediction (SbTMVP) may use the motion field in the collocated picture, which may be similar to the TMVP regular merge candidate. SbTMVP may differ from TMVP in one or more of the following aspects. TMVP may predict motion at a CU level. SbTMVP may predict motion at a sub-CU level. TMVP may fetch temporal motion vectors from the collocated block in the collocated picture. SbTMVP may apply a motion shift before fetching the temporal motion information from the collocated picture. The motion shift may be obtained from the motion vector from a spatial neighboring block of the current CU.
[0225] Block-based video coding may offer a wide range of configuration flexibility. These configurations may depend on a targeted goal (e.g., compression efficiency, complexity, delay, robustness, etc.) and may be driven by the encoder settings.
[0226] FIG. 27 illustrates an example of a hierarchical B picture structure with four temporal layers. To encode successive pictures of a video (e.g., all the successive pictures of the video), a video encoder may use four typical modes of prediction structure: all intra (AI), random access (RA), low delay P picture (LDP), or low delay B picture (LDB).
[0227] For all intra (AI), pictures (e.g., each picture) may be encoded as an intra picture (I picture). Pictures (e.g., all pictures) may be coded in temporal order and may use the same quantization parameter (QP).
[0228] For random access (RA), a hierarchy of bi-predicted pictures (B pictures) may be used (e.g., as shown in FIG. 27). In that configuration, the first picture may be an I picture, and the others may be B pictures encoded in a specific order depending on their positions within the group of pictures (GOP). Each picture may belong to a specific temporal layer (TL). The pictures at a lower temporal level may be the reference for the pictures at the upper temporal level. The QP of pictures (e.g., each picture) may be adjusted periodically depending on its position in the GOP.
[0229] For low delay P picture (LDP), the first picture may be encoded as an I picture and the subsequent pictures may be encoded as predicted pictures (P pictures). The pictures (e.g., all pictures) may be coded in temporal order. The QP of pictures (e.g., each picture) may be adjusted periodically depending on its position relative to the first frame.
[0230] For low delay B picture (LDB), the first picture may be encoded as an I picture and the subsequent pictures may be encoded as a B picture. The pictures (e.g., all pictures) may be coded using the temporal order. The QP of pictures (e.g., each picture) may be adjusted periodically depending on its position relative to the first frame.
[0231] FIG. 28 illustrates an example of decoding side motion vector refinement (DMVR). Bilateral-matching (BM) based decoder side motion vector refinement may be applied (e.g., to increase the accuracy of the MVs of the merge mode). In bi-prediction operation, a refined MV may be searched around the initial MVs in reference picture list L0 and reference picture list L1. BM may calculate the distortion between the two candidate blocks in reference picture list L0 and list L1. As illustrated in FIG. 28, the SAD between the candidate blocks (shown in red in FIG. 28) based on each MV candidate around the initial MV may be calculated. The MV candidate with the lowest SAD may become the refined MV and may be used to generate the bi-predicted signal.
[0232] In examples, the application of DMVR may be restricted and may be (e.g., may only be) applied for the CUs which are coded with at least one of the following modes and features: CU level merge mode with bi-prediction MV; one reference picture is in the past and another reference picture is in the future with respect to the current picture; the distances (e.g., picture order count (POC) difference) from two reference pictures to the current picture are the same; both reference pictures are short-term reference pictures; CU has more than 64 luma samples; both CU height and CU width are larger than or equal to 8 luma samples; BCW weight index indicates equal weight; WP is not enabled for the current block; or CIIP mode is not used for the current block.
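Read as a conjunction of conditions (the usual DMVR design), the restriction list above could be checked as in the following hedged sketch; all field names on the `cu` dictionary are illustrative assumptions.

```python
def dmvr_applicable(cu):
    """cu: dict with illustrative keys describing the coding unit and its references."""
    return (cu['merge_bi_pred']                                        # CU-level merge, bi-pred MV
            and cu['poc_ref0'] < cu['poc_cur'] < cu['poc_ref1']        # one past, one future reference
            and cu['poc_cur'] - cu['poc_ref0'] == cu['poc_ref1'] - cu['poc_cur']  # equal POC distances
            and cu['ref0_short_term'] and cu['ref1_short_term']        # short-term references only
            and cu['width'] * cu['height'] > 64                        # more than 64 luma samples
            and cu['width'] >= 8 and cu['height'] >= 8
            and cu['bcw_equal_weight']                                 # BCW index indicates equal weight
            and not cu['wp_enabled'] and not cu['ciip'])               # no WP, no CIIP
```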
[0233] The refined MV derived by the DMVR process may be used to generate the inter prediction samples and may (e.g., may also) be used in temporal motion vector prediction for future picture coding. The original MV may be used in a deblocking process and may (e.g., may also) be used in spatial motion vector prediction for future CU coding. Additional features of DMVR may include a DMVR search scheme, bilinear interpolation and sample padding, a maximum DMVR processing unit, or DMVR applied to affine merge blocks.
[0234] For the DMVR search scheme, in DMVR, the search points may surround the initial MV, and the MV offset may obey the MV difference mirroring rule. In other words, any points that are checked by DMVR, denoted by a candidate MV pair (MV0, MV1), may satisfy the following two equations:
MV0' = MV0 + MVoffset
MV1' = MV1 - MVoffset

where MVoffset may represent the refinement offset between the initial MV and the refined MV in one of the reference pictures. The refinement search range may be two integer luma samples from the initial MV. The searching may include an integer sample offset search stage and a fractional sample refinement stage.
[0235] A 25-point full search may be applied for integer sample offset searching. The SAD of the initial MV pair may be calculated first. If the SAD of the initial MV pair is smaller than a threshold, the integer sample stage of DMVR may be terminated. Otherwise, the SADs of the remaining 24 points may be calculated and checked in raster scanning order. The point with the smallest SAD may be selected as the output of the integer sample offset searching stage. To reduce the penalty of the uncertainty of DMVR refinement, the original MV may be favored during the DMVR process: the SAD between the reference blocks referred to by the initial MV candidates may be decreased by 1/4 of the SAD value.
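A minimal sketch of this mirrored-offset integer search stage follows; `sad` stands in for the bilateral SAD between the two reference blocks (assumed to return integer costs), and the early-exit threshold is left as a parameter.

```python
def dmvr_integer_search(mv0, mv1, sad, early_exit_thr):
    """Return the best integer offset (dx, dy) within the [-2, 2] x [-2, 2] range."""
    center = sad(mv0, mv1)
    if center < early_exit_thr:                          # terminate the integer stage early
        return (0, 0)
    best_off, best_cost = (0, 0), center - center // 4   # favor the original MV (SAD - 1/4 SAD)
    for dy in range(-2, 3):                              # raster scanning order over 25 points
        for dx in range(-2, 3):
            if (dx, dy) == (0, 0):
                continue                                 # center already evaluated
            cost = sad((mv0[0] + dx, mv0[1] + dy),       # mirroring rule: +offset on L0,
                       (mv1[0] - dx, mv1[1] - dy))       # -offset on L1
            if cost < best_cost:
                best_cost, best_off = cost, (dx, dy)
    return best_off
```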
[0236] The integer sample search may be followed by fractional sample refinement. To save computational complexity, the fractional sample refinement may be derived by using a parametric error surface equation (e.g., instead of an additional search with SAD comparison). The fractional sample refinement may be invoked based on the output of the integer sample search stage. When the integer sample search stage is terminated with the center having the smallest SAD in either the first or the second iteration of the search, the fractional sample refinement may be applied (e.g., further applied).
[0237] In parametric error surface based sub-pixel offset estimation, the center position cost and the costs at the four neighboring positions from the center may be used to fit a 2-D parabolic error surface equation of the following form:

E(x, y) = A(x - x_min)^2 + B(y - y_min)^2 + C

where (x_min, y_min) may correspond to the fractional position with the least cost and C may correspond to the minimum cost value. By solving the above equation using the cost values of the five search points, (x_min, y_min) may be computed as:
x_min = (E(-1, 0) - E(1, 0)) / (2(E(-1, 0) + E(1, 0) - 2E(0, 0)))
y_min = (E(0, -1) - E(0, 1)) / (2(E(0, -1) + E(0, 1) - 2E(0, 0)))
[0238] The values of x_min and y_min may be automatically constrained to be between -8 and 8, since cost values (e.g., all cost values) are positive and the smallest value is E(0, 0). This may correspond to a half-pel offset with 1/16th-pel MV accuracy. The computed fractional (x_min, y_min) may be added to the integer distance refinement MV to get the sub-pixel accurate refinement delta MV.
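The error-surface solve and the 1/16-pel clamping can be written compactly as below; `E` is assumed to be a mapping from integer offsets to bilateral costs, with the minimum at the center.

```python
def subpel_offset(E):
    """Return the sub-pel offset (x, y) in 1/16-pel units, clamped to [-8, 8]."""
    def solve(c_neg, c_pos, c0):
        # closed-form minimum of the 1-D parabola through three cost samples
        denom = 2 * (c_neg + c_pos - 2 * c0)
        return (c_neg - c_pos) / denom if denom else 0.0
    x = solve(E[(-1, 0)], E[(1, 0)], E[(0, 0)])
    y = solve(E[(0, -1)], E[(0, 1)], E[(0, 0)])
    # costs are positive with the minimum at the center, so x, y stay within
    # a half-pel; express them in 1/16-pel units clamped to [-8, 8]
    to_16th = lambda v: max(-8, min(8, round(v * 16)))
    return to_16th(x), to_16th(y)
```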
[0239] For bilinear interpolation and sample padding, the resolution of the MVs may be 1/16 luma samples. The samples at fractional positions may be interpolated using an 8-tap interpolation filter. In DMVR, the search points may surround the initial fractional-pel MV with integer sample offsets, so the samples at fractional positions may be interpolated for the DMVR search process. A bi-linear interpolation filter may be used to generate the fractional samples for the searching process in DMVR (e.g., to reduce the calculation complexity). By using a bi-linear filter with a 2-sample search range, DMVR may not access more reference samples compared to other motion compensation processes. The 8-tap interpolation filter may be applied to generate the final prediction (e.g., after the refined MV is attained with the DMVR search process). To avoid accessing more reference samples for MC processes, the samples that may not be needed for the interpolation process based on the original MV but may be needed for the interpolation process based on the refined MV may be padded from the available samples.
[0240] For a maximum DMVR processing unit, if the width and/or height of a CU are larger than 16 luma samples, the CU may be (e.g., further) split into subblocks with a width and/or height equal to 16 luma samples. The maximum unit size for DMVR searching process may be limited to 16x16.
[0241] For DMVR applied to affine merge blocks, DMVR may be applied to affine merge coded blocks when a DMVR condition is satisfied. An affine motion field may be modeled as follows (for a 6-parameter affine case):

mv_x(x, y) = mv_0x + ((mv_1x - mv_0x) / W) * x + ((mv_2x - mv_0x) / H) * y
mv_y(x, y) = mv_0y + ((mv_1y - mv_0y) / W) * x + ((mv_2y - mv_0y) / H) * y

where (mv_x, mv_y) may be the motion vector at location (x, y) and (mv_0x, mv_0y) may be the base MV representing the translation motion of the affine model. Parameters (mv_1x - mv_0x)/W, (mv_2x - mv_0x)/H, and their vertical counterparts may represent the non-translation parameters (e.g., rotation, scaling).
[0242] In the DMVR process applied to affine, the first stage of multi-stage DMVR may be applied to the translation part of the affine motion such that a translation MV offset is added to the CPMVs (e.g., all the CPMVs) of the candidate in the affine merge list if the candidate meets the DMVR condition. The MV offset may be derived by minimizing the cost of bilateral matching.
[0243] The first stage refinement process may include a 3x3 square search pattern used to loop through the search range which may be set as [-3, 3] to find the best integer MV offset. A half-pel search may be conducted around the best integer position and an error surface estimation may be performed to find an optimal MV offset with 1/16 precision.
[0244] FIG. 29 illustrates an example combination of motion vector predictors and motion vector difference values. A block to predict is shown together with its two AMVP motion vector predictor candidates MVP0 and MVP1, and a motion vector difference MVd to signal. Given the motion vector difference coding system, the coding cost of a motion vector difference may be correlated to its magnitude and may increase as a function of the MVd magnitude. FIG. 29 shows that the coding configuration may use the motion vector predictor MVP1 instead of MVP0, which may lead to a smaller motion vector difference magnitude. On the decoder side, MVP0 may be detected as non-optimal. The decoder may be able to detect that the use of motion vector predictor MVP0 is not the most optimal one to employ for pointing to the spatial position corresponding to MVP0 + MVd (e.g., as shown in FIG. 29).
[0245] FIG. 30 illustrates an example decoder side detection of a non-optimal motion vector coding configuration. In this example, a proper encoding strategy is assumed. If the motion vector difference MVd of FIG. 29 is signaled, then MVP0 may not be used as the motion vector predictor for the considered block. Hence, the decoder may be able to detect that the use of motion vector predictor MVP1 is the most likely one compared to MVP0. As such, the signaling of the motion vector predictor and the signaling of the motion vector difference may carry some redundant information.
[0246] An example of a motion vector coding algorithm is provided as follows. On the decoder side, a detector that may be able to detect a non-valid situation (e.g., such as the example shown in FIG. 29) may adapt the motion vector relative to the situation and may reconstruct the motion vector based on a newly defined motion vector predictor candidate (MVP'idx), such as the one described below:
[0247] On the encoder side, a detector that may be able to detect a non-valid combination of motion vector predictor and motion vector difference may adapt the motion vector predictor by the above equation, may code an MVD of smaller magnitude based on this new candidate, and may encode the newly defined MVD and the corresponding MVP index of the combination that may be detected as a non-valid situation by the decoder.
[0248] FIG. 31 illustrates an example of global gain vs. d parameter trends. Examples herein may increase motion data coding efficiency by optimizing the cost of the MVD when the MV coding design examples described herein are used. The MV coding design examples herein describe a mechanism that may rely on a static parameter d (see the equation above), which may be any positive value. The selection of the best d parameter to get an optimum compression gain may be complex. This d parameter may impact both the magnitude of a (e.g., new) MVP candidate and (e.g., may then impact) the gain (e.g., in magnitude reduction) of a (e.g., new) MVD based on this new MVP candidate. The d parameter may (e.g., may also) impact the number of blocks where this mechanism may be applied. The bigger the parameter d is, the better the potential gain on blocks (e.g., each individual block) may be, but the fewer the blocks to which the mechanism may be applied (e.g., as shown in FIG. 31). The optimum d for a sequence or a set of sequences may not be deduced automatically but may be adjusted empirically depending on the content and the encoding parameters used.
[0249] Examples herein may include a dynamic parameter d when the MV coding mechanism described herein is used. This dynamic parameter d may be set automatically depending on internal characteristics at the block level or the picture level (e.g., frame level) available both at the encoder and decoder sides. The mechanism that defines this dynamic parameter d per block (e.g., for each block) may be shared at the encoder and decoder sides and may lead to identical reconstructed PU motion vector(s) on the encoder and decoder sides.
[0250] In examples, the dynamic parameter may depend on a relative distance between a current POC and a reference POC. The encoder or decoder may determine a distance between the current POC and the reference POC. Based on the determined distance between the current POC and the reference POC, the dynamic parameter may be computed.
[0251] In examples, the values of the dynamic parameter applied for a block may rely on the distance between the current POC and the POC of the AMVP reference image used for this block, as shown in the following equation:

distPOC = abs(currPOC - refPOC)
[0252] The absolute distance between the current POC and the POC of the AMVP reference used may vary from 1 up to 32 for blocks (e.g., for each block). Statistically, the amplitude of the coded MV may be smaller if this distance is short than if this distance is large. A specific d parameter may be associated with each level (e.g., or block of levels) of this distPOC. Typically, the d parameter value may increase if the absolute value of distPOC increases.
[0253] Motion vector differences may be of larger magnitude as the temporal distance from a block to its reference picture increases. In examples, a monotonically increasing function may be used. The function may be an affine, piecewise linear function. The function may be defined by means of a lookup table, which may be used to map each possible POC distance to a value of the dynamic parameter d.
[0254] Three examples of the function are as follows:

1) f(distPoc) = 2 if distPoc < 8; 4 if distPoc > 8

3) f(distPoc) = a * log(distPoc) + b, where a = 1.6 and b = 0.3
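As a hedged illustration, examples 1) and 3) can be implemented as follows (example 2) is not reproduced in this text, so it is omitted here); the base of the logarithm is an assumption, as the source does not specify it.

```python
import math

def d_from_poc_distance(curr_poc, ref_poc, variant=1):
    """Map the POC distance to the dynamic parameter d, per examples 1) and 3)."""
    dist_poc = abs(curr_poc - ref_poc)        # statistically in [1, 32] per [0252]
    if variant == 1:                          # piecewise-constant example 1)
        return 2 if dist_poc < 8 else 4
    a, b = 1.6, 0.3                           # log example 3) with a = 1.6, b = 0.3
    return a * math.log(dist_poc) + b         # natural log assumed
```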
[0255] FIG. 32 illustrates an example of a dynamic parameter d being computed on the encoder side. On the encoder side, for each block, the d parameter value may be computed before the detector (e.g., as shown in FIG. 32). In examples, the encoder may compute a dynamic parameter. A selected motion vector predictor may be determined to be non-valid. Based on determining that the selected motion vector predictor is non-valid, a replacement motion vector predictor may be determined based on the computed dynamic parameter. A motion vector difference may be determined based on the replacement motion vector predictor, and an indication of the motion vector difference may be included in the video data. The current block may be encoded based on the replacement motion vector predictor.
[0256] FIG. 33 illustrates an example of a dynamic parameter d being computed on the decoder side. On the decoder side, since the non-valid detector may be fully independent of the value of d, the function f(currPoc, refPoc) that computes d may be processed after the detector process and if (e.g., only if) the detector is true (e.g., as shown in FIG. 33). The function f(currPoc, refPoc) that computes the value of d may be the same at the encoder and the decoder side. In examples, a decoder may determine that a selected motion vector predictor for a current block is non-valid. Based on determining the selected motion vector predictor is non-valid, a dynamic parameter may be computed. A replacement motion vector predictor may be determined based on the computed dynamic parameter. The current block may be decoded based on the replacement motion vector predictor.
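The ordering difference between FIGs. 32 and 33 can be sketched as follows for the decoder side; `detector`, `f`, and `replace_mvp` are placeholders for the non-valid-combination detector, the shared function computing d, and the replacement-MVP equation (not reproduced in this text), so their exact forms are assumptions.

```python
def decode_mv(mvp, mvd, curr_poc, ref_poc, detector, f, replace_mvp):
    """Reconstruct a motion vector, computing d only when the detector fires."""
    if detector(mvp, mvd):                 # detector is independent of d, so d is
        d = f(curr_poc, ref_poc)           # computed only in the non-valid case
        mvp = replace_mvp(mvp, mvd, d)     # swap in the replacement MVP candidate
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

On the encoder side, by contrast, `f` would be evaluated for every block before the detector runs, as described in paragraph [0255].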
[0257] In examples, the dynamic parameter d may depend on a temporal layer ID. The encoder or decoder may determine a temporal layer ID of a current picture. Based on the temporal layer ID of the current picture, the dynamic parameter may be computed. If a hierarchical B structure is used (e.g., as shown in FIG. 27), the values of the dynamic parameter applied for a block may rely on the current ID of a temporal layer (TL) of the current picture.
[0258] In examples, a specific dynamic parameter may be associated with each TL ID (or set of TL IDs). The dynamic parameter value may increase if the absolute value of the TL ID increases. In examples, an infinite value of the dynamic parameter may be set for a TL ID (or a set of TL IDs), which may selectively deactivate the modified motion data coding system for the selected TL IDs.
[0259] The function f( ) that may compute the value of the dynamic parameter may be the same at the encoder and the decoder side. This function may be monotonically decreasing as a function of the temporal layer ID. The temporal distance between a picture and its closest reference picture may decrease as a function of the temporal layer ID; thus motion vectors, and hence motion vector differences, may (e.g., may also) statistically decrease as a function of the temporal layer ID. The proposed function f may be linear, piecewise linear, or may take the form of a lookup table that may map the temporal layer ID to a value of the dynamic parameter. An example of the function is the following:
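Since the concrete example is not reproduced in this text, the following sketch shows one plausible lookup-table form; the table values are purely illustrative, chosen only to be monotonically decreasing in TL ID as described above.

```python
# illustrative table; float('inf') for a TL ID would disable the tool there, per [0258]
D_BY_TL = {0: 8, 1: 6, 2: 4, 3: 2}

def d_from_temporal_layer(tl_id):
    """Lookup-table form of f(TL ID), shared by encoder and decoder."""
    return D_BY_TL.get(tl_id, 2)
```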
[0260] In examples, the dynamic parameter d may depend on the QP level. The encoder or decoder may determine an absolute distance between the current POC and the POC of the picture with the minimum QP in the current GOP. Based on this absolute distance, the dynamic parameter may be computed.
[0261] If different QPs are used for each picture, the values of the dynamic parameter applied for the blocks of a picture (e.g., all blocks of a picture) may rely on the absolute distance between the current POC and the POC of the picture with the minimum QP in the current GOP, as shown in the following equation:

distMinQP = abs(MinQPPOC - currPOC), where MinQPPOC = POC(Argmin_{k in GOP}(QP_k))
[0262] A specific dynamic parameter may be associated with each QP value (or set of QPs). The dynamic parameter value may increase if the distMinQP value increases. The proposed function f may be linear, piecewise linear, or may take the form of a lookup table that may map the parameter distMinQP to a value of d. This set of values of the dynamic parameter may be applied in the same way at the encoder and the decoder side.
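A hedged sketch of the distMinQP computation follows; `gop` is assumed to be a list of (POC, QP) pairs known identically at the encoder and decoder, and the names are illustrative.

```python
def dist_min_qp(curr_poc, gop):
    """gop: list of (poc, qp) pairs for the current GOP."""
    min_qp_poc = min(gop, key=lambda p: p[1])[0]   # POC of the minimum-QP picture
    return abs(min_qp_poc - curr_poc)              # distMinQP per [0261]
```

The result would then be mapped to d through an increasing function or lookup table, as described above.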
[0263] In examples, the dynamic parameter d may depend on the value of motion vector candidate similarity. The encoder or decoder may determine a similarity between a first motion vector predictor and a second motion vector predictor. Based on the similarity between the first motion vector predictor and the second motion vector predictor, the dynamic parameter may be computed.
[0264] The values of the dynamic parameter applied for a block may rely on the current similarity of the two current MVP candidates. This similarity may be calculated as the norm of the vector difference between these candidates and can be expressed in several versions:
The classical version:

MVPSimilarity = ||MV1 - MV0||

or the normalized version:
[0265] A specific dynamic parameter may be associated with each MVPSimilarity value or range of values. The dynamic parameter value may increase if the MVPSimilarity value decreases. The function f( ) that computes the value of d may be the same at the encoder and the decoder side.
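A hedged sketch of the classical similarity measure and a decreasing mapping to d follows; the thresholds and output values are illustrative assumptions.

```python
import math

def d_from_mvp_similarity(mvp0, mvp1):
    """Classical version: Euclidean norm of the MVP difference, mapped to d."""
    sim = math.hypot(mvp1[0] - mvp0[0], mvp1[1] - mvp0[1])   # ||MV1 - MV0||
    # d increases as the similarity value (the norm) decreases, per [0265]
    return 4 if sim < 2 else (2 if sim < 8 else 1)
```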
[0266] In examples, the dynamic parameter d may depend on the set of parameters available at decoder and encoder side. The values of the dynamic parameter applied for a block may rely on (e.g., may be computed based on) combinations of the parameters (e.g., any combination of all the parameters) of the examples herein, as shown by the following: f (currPoc, refPoc, TLid, QP, maxQPPoc, MVPSimilarity)
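One possible, purely illustrative combination is sketched below; the terms and their weighting are assumptions, since the source leaves the combined function f unspecified.

```python
def d_combined(dist_poc, tl_id, mvp_similarity):
    """Illustrative combination of the per-parameter mappings into one d value."""
    base = 2 if dist_poc < 8 else 4                     # POC-distance term (example 1 above)
    tl_cap = {0: 8, 1: 6, 2: 4, 3: 2}.get(tl_id, 2)     # illustrative TL-based cap
    d = min(base, tl_cap)
    return d + 1 if mvp_similarity < 2 else d           # boost d when the MVPs nearly coincide
```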
[0267] In examples, the merge with motion vector difference (MMVD) merge case may be applied. If one of the first two MVP candidates of the merge list for whole-block-based motion compensation is used, a motion vector difference may be signaled and may be additively applied to the selected MVP. The examples described herein may be applied to the first two merge candidates (e.g., if MMVD is used to encode the motion data of a given inter PU).
[0268] Examples of high-level control of the proposed motion vector coding are provided herein. In examples, the proposed repartition of dynamic parameters may normatively be transmitted by means of a dedicated signaling indication (e.g., flag) or syntax element at one or more of the following levels: sequence parameter set (SPS); picture parameter set (PPS); picture header; slice header; sub-picture level; and/or coding tree unit (CTU) level.
[0269] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Claims
1. A method for video decoding, the method comprising: determining that a selected motion vector predictor for a current block is non-valid; based on determining the selected motion vector predictor is non-valid, computing a dynamic parameter; determining a replacement motion vector predictor based on the computed dynamic parameter; and decoding the current block based on the replacement motion vector predictor.
2. The method of claim 1, further comprising: determining a distance between a current picture order count (POC) and a reference POC; and computing the dynamic parameter based on the distance between the current POC and the reference POC.
3. The method of claim 1 or 2, further comprising: determining a temporal layer ID of a current picture; and computing the dynamic parameter based on the temporal layer ID of the current picture.
4. The method of any one of claims 1 to 3, further comprising: determining an absolute distance between a current POC and a minimum quantization parameter (QP) in the current POC; and computing the dynamic parameter based on the absolute distance between the current POC and the minimum QP in the current POC.
5. The method of any one of claims 1 to 4, further comprising: determining a similarity between a first motion vector predictor and a second motion vector predictor; and computing the dynamic parameter based on the similarity between the first motion vector predictor and the second motion vector predictor.
6. A device for video decoding, comprising: a processor configured to: determine that a selected motion vector predictor for a current block is non-valid;
based on determining the selected motion vector predictor is non-valid, compute a dynamic parameter; determine a replacement motion vector predictor based on the computed dynamic parameter; and decode the current block based on the replacement motion vector predictor.
7. The device of claim 6, wherein the processor is further configured to: determine a distance between a current POC and a reference POC; and compute the dynamic parameter based on the distance between the current POC and the reference POC.
8. The device of claim 6 or 7, wherein the processor is further configured to: determine a temporal layer ID of a current picture; and compute the dynamic parameter based on the temporal layer ID of the current picture.
9. The device of any one of claims 6 to 8, wherein the processor is further configured to: determine an absolute distance between a current POC and a minimum QP in the current POC; and compute the dynamic parameter based on the absolute distance between the current POC and the minimum QP in the current POC.
10. The device of any one of claims 6 to 9, wherein the processor is further configured to: determine a similarity between a first motion vector predictor and a second motion vector predictor; and compute the dynamic parameter based on the similarity between the first motion vector predictor and the second motion vector predictor.
11. A method for video encoding, the method comprising: computing a dynamic parameter; determining that a selected motion vector predictor is non-valid; based on determining that the selected motion vector predictor is non-valid, determining a replacement motion vector predictor based on the computed dynamic parameter; and encoding a current block based on the replacement motion vector predictor.
12. The method of claim 11, further comprising: determining a distance between a current POC and a reference POC; and computing the dynamic parameter based on the distance between the current POC and the reference POC.
13. The method of claim 11 or 12, further comprising: determining a temporal layer ID of a current picture; and computing the dynamic parameter based on the temporal layer ID of the current picture.
14. The method of any one of claims 11 to 13, further comprising: determining an absolute distance between a current POC and a minimum QP in the current POC; and computing the dynamic parameter based on the absolute distance between the current POC and the minimum QP in the current POC.
15. The method of any one of claims 11 to 14, further comprising: determining a similarity between a first motion vector predictor and a second motion vector predictor; and computing the dynamic parameter based on the similarity between the first motion vector predictor and the second motion vector predictor.
16. A device for video encoding, comprising: a processor configured to: compute a dynamic parameter; determine that a selected motion vector predictor is non-valid; based on determining that the selected motion vector predictor is non-valid, determine a replacement motion vector predictor based on the computed dynamic parameter; and encode a current block based on the replacement motion vector predictor.
17. The device of claim 16, wherein the processor is further configured to: determine a distance between a current POC and a reference POC; and compute the dynamic parameter based on the distance between the current POC and the reference POC.
18. The device of claim 16 or 17, wherein the processor is further configured to: determine a temporal layer ID of a current picture; and compute the dynamic parameter based on the temporal layer ID of the current picture.
19. The device of any one of claims 16 to 18, wherein the processor is further configured to: determine an absolute distance between a current POC and a minimum QP in the current POC; and compute the dynamic parameter based on the absolute distance between the current POC and the minimum QP in the current POC.
20. The device of any one of claims 16 to 19, wherein the processor is further configured to: determine a similarity between a first motion vector predictor and a second motion vector predictor; and compute the dynamic parameter based on the similarity between the first motion vector predictor and the second motion vector predictor.