WO2013037254A1 - Method and apparatus for reduction of deblocking filter
- Publication number: WO2013037254A1
- Authority: WIPO (PCT)
Classifications
- H04N19/117: Filters, e.g. for pre-processing or post-processing
- H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/176: Adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock
- H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/50: Predictive coding
- H04N19/82: Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
- H04N19/86: Pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness
Definitions
- deblocking filter is applied to boundaries of 8x8 blocks, where the boundary strength (BS) is determined based on 4x4 blocks.
- For the luma component, the stronger of the boundary strengths for the two neighboring 4x4 blocks associated with an 8x8 block is used as the boundary strength of the corresponding boundary of the 8x8 block.
- An embodiment according to the present invention derives the boundary strength for the sub-boundaries between two 8x8 blocks individually based on the respective 4x4 blocks.
- Fig. 7A illustrates an example of a horizontal sub-boundary 710 between two 4x4 blocks, P and Q.
- Fig. 7B illustrates an example of a vertical sub-boundary 720 between two 4x4 blocks, P and Q.
- An exemplary boundary strength derivation for the sub-boundary is shown in Fig. 8.
- the boundary strength decision starts from block 810.
- a test, "P or Q is intra coded?" is performed in step 820. If the result is Yes (as indicated by "Y" in Fig. 8), BS is assigned a value of 2. If the test result of step 820 is No (as indicated by "N" in Fig. 8), a further test 830 is performed. In step 830, the test, "((Boundary is TU boundary) and (P or Q contains coefficients)) or (P and Q have different reference picture or MV difference > 4)?" is performed. If the test result is Yes, BS is assigned a value of 1. Otherwise, BS is assigned a value of 0. While 4x4 sub-boundaries are used for 8x8 block boundaries in this example, other boundary sizes may also be used.
- the BS value can be used to control deblocking operation such as filter on/off control.
- An exemplary BS usage is shown in Table 1. If the BS value is 0, the deblocking filter is turned off. If the BS value is 1, the luma deblocking filtering is turned on and the filter parameter tc offset, as defined in the HEVC standard, is set to 0. If the BS value is 2, both luma and chroma deblocking filtering are turned on and tc offset is set to 2. (A sketch of this derivation and parameter usage is given after this list.)
- deblocking parameters comprise β and tc.
- the parameter β is used to determine the filter decision threshold and its value is related to the quantization parameter (QP) of the block.
- the dependency of β on QP is shown in Fig. 9.
- the parameter tc is used as the filter clipping threshold.
- the dependency of tc on QP is also shown in Fig. 9. If BS is greater than 1, the parameter tc is specified using QP+2 as the table input.
- Otherwise, the parameter tc is specified using QP as the table input.
- the deblocking filtering may be on only if the BS value is greater than 0.
- the parameter β is determined as shown in Fig. 9.
- the filter on/off decision can be determined according to the BS value for the respective sub-boundary and the edge activity measured using the 4x4 block.
- Fig. 10A illustrates an example of deriving the edge activity measure d based on two lines across the sub-boundary.
- If BS > 0 and d < β, the deblocking filter is applied. Otherwise, the deblocking filtering is not applied.
- the above derivation illustrates a specific example to derive the edge activity based on two lines from the two neighboring 4x4 blocks. However, more or fewer lines may be used to derive the edge activity. Furthermore, while a specific formula is used to derive the edge activity, a person skilled in the art may use other formulas to measure the edge activity.
- the strong/weak filter decision can also be derived based on the edge activity and other measures. For example, if ( d < ( β >> 2 ) && | p3 - p0 | + | q0 - q3 | < ( β >> 3 ) && | p0 - q0 | < ( 5*tc + 1 ) >> 1 ), the strong filter is selected; otherwise, the weak filter is selected.
- the deblocking filtering can be applied to luma and chroma signals.
- p2' = Clip3( p2 - 2*tc, p2 + 2*tc, ( 2*p3 + 3*p2 + p1 + p0 + q0 + 4 ) >> 3 ) (15)
- q1' = Clip3( q1 - 2*tc, q1 + 2*tc, ( p0 + q0 + q1 + q2 + 2 ) >> 2 ) (17)
- Clip3( a, b, x ) is a function that clips the variable x to the range between a and b.
- Δ = Clip3( -tc, tc, Δ ), where Clip1Y( ) clips the value between the maximum and minimum luminance values. (A sketch of these filter decisions and filtering operations is given after this list.)
- Embodiments of the present invention as described above may be implemented in various hardware, software code, or a combination of both.
- An embodiment of the present invention may be program code integrated into video compression software to perform the processing described herein.
- An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
- the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA).
- These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
- the software code or firmware code may be developed in different programming languages and different formats or styles.
- the software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
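The boundary strength derivation of Fig. 8, the switching behavior of Table 1 and the QP-driven lookup of β and tc described in the items above are sketched below in a minimal form. The block descriptors, helper names and the placeholder β/tc tables are assumptions of this illustration and are not taken from the HM software or from Fig. 9.

```python
def boundary_strength(P, Q, is_tu_boundary):
    """BS decision of Fig. 8 for one 4x4 sub-boundary.
    P and Q are dicts describing the two neighboring 4x4 blocks; the keys
    (intra, has_coeff, ref_idx, mv) are illustrative names only."""
    if P["intra"] or Q["intra"]:
        return 2
    tu_with_coeff = is_tu_boundary and (P["has_coeff"] or Q["has_coeff"])
    different_ref = P["ref_idx"] != Q["ref_idx"]
    large_mv_diff = (abs(P["mv"][0] - Q["mv"][0]) > 4 or
                     abs(P["mv"][1] - Q["mv"][1]) > 4)  # threshold as in the text
    return 1 if (tu_with_coeff or different_ref or large_mv_diff) else 0

def filter_switches(bs):
    """Table 1 usage: (luma filtering on, chroma filtering on, tc offset)."""
    if bs == 0:
        return False, False, 0
    if bs == 1:
        return True, False, 0
    return True, True, 2  # bs == 2

# Placeholder lookup tables indexed by QP; the real values are those shown
# in Fig. 9 / defined in the HEVC draft, not these linear stand-ins.
BETA_TABLE = [max(0, 2 * qp - 52) for qp in range(64)]
TC_TABLE = [max(0, qp - 27) for qp in range(66)]

def deblock_params(qp, bs):
    """beta from QP; tc from QP + 2 when BS > 1, otherwise from QP."""
    return BETA_TABLE[qp], TC_TABLE[qp + 2 if bs > 1 else qp]

P = {"intra": True, "has_coeff": False, "ref_idx": 0, "mv": (0, 0)}
Q = {"intra": False, "has_coeff": True, "ref_idx": 0, "mv": (8, 0)}
bs = boundary_strength(P, Q, is_tu_boundary=True)
print(bs, filter_switches(bs), deblock_params(qp=32, bs=bs))
```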
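The filter on/off test, the strong/weak selection and the strong-filter sample equations (15) and (17) can be sketched as follows. The array layout (p[0]..p[3] and q[0]..q[3] counted away from the edge), 8-bit clipping and the exact strong/weak side conditions are assumptions of this example rather than normative definitions.

```python
def clip3(lo, hi, x):
    """Clip x to the range [lo, hi]."""
    return max(lo, min(hi, x))

def clip1y(x, bit_depth=8):
    """Clip to the valid luma range, e.g. [0, 255] for 8-bit video."""
    return clip3(0, (1 << bit_depth) - 1, x)

def filter_decisions(d, bs, beta, tc, p, q):
    """Returns (filter_on, use_strong_filter) for one sub-boundary line."""
    if bs == 0 or d >= beta:
        return False, False
    strong = (d < (beta >> 2) and
              abs(p[3] - p[0]) + abs(q[0] - q[3]) < (beta >> 3) and
              abs(p[0] - q[0]) < ((5 * tc + 1) >> 1))
    return True, strong

def strong_filter_p2_q1(p, q, tc):
    """Equations (15) and (17): filtered p2' and q1', clipped to +/- 2*tc."""
    p2_new = clip3(p[2] - 2 * tc, p[2] + 2 * tc,
                   (2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3)
    q1_new = clip3(q[1] - 2 * tc, q[1] + 2 * tc,
                   (p[0] + q[0] + q[1] + q[2] + 2) >> 2)
    return clip1y(p2_new), clip1y(q1_new)

p = [100, 100, 100, 100]  # p0..p3, left/upper side of the edge
q = [106, 106, 106, 106]  # q0..q3, right/lower side of the edge
print(filter_decisions(d=0, bs=2, beta=64, tc=4, p=p, q=q))  # (True, True)
print(strong_filter_p2_q1(p, q, tc=4))                       # (101, 105)
```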
Abstract
A method and apparatus for deblocking of reconstructed video are disclosed. In one embodiment, the method divides a block boundary into two sub-boundaries and separates lines or columns across the sub-boundaries into two groups. The deblocking filter decision for each group is determined based on the lines or columns in the respective group. In another embodiment, the method divides block edges of blocks in the LCUs into two edge groups, where the first edge group corresponds to horizontal block edges between two LCUs and the second edge group corresponds to remaining block edges not included in the first edge group. The number of lines processed by a vertical filter in the first edge group is less than the number of lines processed by a vertical filter in the second edge group. Accordingly, a system embodying the present invention has reduced storage requirement.
Description
METHOD AND APPARATUS FOR REDUCTION OF DEBLOCKING FILTER
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present invention claims priority to U.S. Provisional Patent Application, Serial No. 61/533,892, filed on September 13, 2011, entitled "Line Buffers Reduction for Deblocking Filter", and Chinese Patent Application, Serial No. 201110270680.5, filed on September 14, 2011, entitled "A Method of Deblocking Filter". The U.S. Provisional Patent Application and the Chinese Patent Application are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
[0002] The present invention relates to video coding systems. In particular, the present invention relates to a method and apparatus for reduction of line buffers associated with the deblocking filter.
BACKGROUND
[0003] Motion estimation is an effective inter-frame coding technique to exploit temporal redundancy in video sequences. Motion-compensated inter-frame coding has been widely used in various international video coding standards. The motion estimation adopted in various coding standards is often a block-based technique, where motion information such as coding mode and motion vector is determined for each macroblock or similar block configuration. In addition, intra-coding is also adaptively applied, where the picture is processed without reference to any other picture. The inter-predicted or intra- predicted residues are usually further processed by transformation, quantization, and entropy coding to generate a compressed video bitstream. During the encoding process, coding artifacts are introduced, particularly in the quantization process. In order to alleviate the coding artifacts, additional processing has been applied to reconstructed video to enhance picture quality in newer coding systems. The additional processing is often configured in an in-loop operation so that the encoder and decoder may derive the same reference pictures to achieve improved system performance.
[0004] Fig. 1 A illustrates an exemplary adaptive inter/intra video coding system incorporating in-loop processing. For inter-prediction, Motion Estimation (ME)/Motion Compensation (MC) 112 is used to provide prediction data based on video data from other picture or pictures. Switch 114 selects Intra Prediction 110 or inter-prediction data and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction error is then processed by Transformation (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to form a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area. The side information may also be subject to entropy
coding and is therefore provided to Entropy Encoder 122 as shown in Fig. 1A. When an inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in
Reference Picture Buffer 134 and used for prediction of other frames.
[0005] As shown in Fig. 1A, incoming video data undergoes a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to a series of processing. Accordingly, various in-loop processing is applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. In the High Efficiency Video Coding (HEVC) standard being developed, Deblocking Filter (DF) 130, Sample Adaptive Offset (SAO) 131 and Adaptive Loop Filter (ALF) 132 have been developed to enhance picture quality. The in-loop filter information may have to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, in-loop filter information from SAO and ALF is provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, DF 130 is applied to the reconstructed video first; SAO 131 is then applied to DF-processed video; and ALF 132 is applied to SAO-processed video. However, the processing order among DF, SAO and ALF can be re-arranged.
[0006] A corresponding decoder for the encoder of Fig. 1A is shown in Fig. 1B. The video bitstream is decoded by Video Decoder 142 to recover the transformed and quantized residues, SAO/ALF information and other system information. At the decoder side, only Motion Compensation (MC) 113 is performed instead of ME/MC. The decoding process is similar to the reconstruction loop at the encoder side. The recovered transformed and quantized residues, SAO/ALF information and other system information are used to reconstruct the video data. The reconstructed video is further processed by DF 130, SAO 131 and ALF 132 to produce the final enhanced decoded video.
[0007] The coding process in HEVC encodes or decodes a picture using a block structure named Largest Coding Unit (LCU). The LCU is adaptively partitioned into coding units (CUs) using quadtree. In each leaf CU, DF is performed for each 8x8 block and in HEVC Test Model Version 4.0 (HM-4.0), the DF is applied to 8x8 block boundaries. For each 8x8 block, horizontal filtering across vertical block boundaries (also called vertical edges) is first applied, and then vertical filtering across horizontal block boundaries (also called horizontal edges) is applied. During processing of a luma block boundary, four pixels on each side of the boundary are involved in filter parameter derivation, and up to three pixels on each side of the boundary may be changed after filtering. Fig. 2A illustrates the pixels involved in the DF process for a vertical edge 210 between two blocks, where each smallest square represents one pixel. The pixels on the left side (i.e., pixel columns pO to p3 as indicated by 220) of the edge are from one 8x8 block, and the pixels on the right side (i.e., pixel columns qO to q3 as indicated by 230) of the edge are from another 8x8 block. In the DF process according to HM-4.0, the coding information of the two 8x8 blocks is used to calculate the boundary strength of the edge first. However, there are also variations where the boundary strength is determined using other schemes. After the boundary strength is determined, columns p0-p3 and q0-q3 of the reconstructed pixels are used to derive filter parameters including filter on/off decision and strong/weak filter selection as shown in Fig. 2B and Fig. 2C
respectively. The filter on/off decision shown in Fig. 2B is based on the third line 240 (counted from top) and the sixth line 250 according to HM-4.0. Fig. 2C illustrates an example of filter strong/weak decision for each line based on respective boundary pixels as indicated by the thick-lined boxes 260-267. In HM-4.0, the derivation is only required for the luma component. Finally, reconstructed pixels are horizontally filtered to generate DF intermediate pixels. During the luma filtering horizontally across the vertical boundary 210, pixels in columns p0-p3 and q0-q3 are referenced, but only pixels in columns p0-p2 and q0-q2 may be modified (i.e., filtered).
[0008] For horizontal filtering across vertical block boundaries, unfiltered reconstructed pixels (i.e., pre-DF pixels) are used for filter parameter derivation and also used as source pixels for the filter operation. For vertical filtering across horizontal block boundaries, unfiltered reconstructed pixels (i.e., pre-DF pixels) are used for filter parameter derivation, and DF intermediate pixels (i.e. pixels after horizontal filtering) are used as source pixels for the vertical filtering. For DF process of a chroma block boundary, two pixels on each side are involved in filter parameter derivation, and at most one pixel on each side may be modified after filtering. During chroma filtering, pixels in columns p0-p1 and q0-q1 are referenced, but only pixels in columns p0 and q0 are filtered.
[0009] Fig. 3 illustrates the boundary pixels involved in the DF process for a horizontal edge 310, where each smallest square represents one pixel. The pixels on the upper side (i.e., pixel rows p0 to p3 as indicated by 320) of the edge are from one 8x8 block, and the pixels on the lower side (i.e., pixel rows q0 to q3 as indicated by 330) of the edge are from another 8x8 block. The DF process for the horizontal edge is similar to the DF process for the vertical edge. First, the coding information of the two 8x8 blocks is used to calculate the boundary strength of the edge. Next, rows p0-p3 and q0-q3 of reconstructed pixels are used to derive filter parameters including filter on/off decision and strong/weak filter selection. Again, this is only required for luma. In HM-4.0, reconstructed pixels are used for deriving filter decisions. Finally, DF intermediate pixels are vertically filtered to generate DF output pixels. During the luma filtering, pixels in rows p0-p3 and q0-q3 are referenced, but only pixels in rows p0-p2 and q0-q2 are filtered. During chroma filtering, pixels in rows p0-p1 and q0-q1 are referenced, but only pixels in rows p0 and q0 are filtered.
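The data-access pattern of the last two paragraphs can be restated in a small structure for reference; the dictionary layout is illustrative only and simply mirrors the text (four pixels read and up to three modified per side for luma, two read and one modified per side for chroma, with reconstructed pixels driving the decisions of both passes and DF intermediate pixels feeding the vertical pass).

```python
# Pixels referenced / possibly modified on each side of a block edge in HM-4.0.
DF_ACCESS = {
    "luma": {"referenced": 4, "modified": 3},
    "chroma": {"referenced": 2, "modified": 1},
}

# Source pixels used for the decisions and for the filtering itself.
DF_SOURCE = {
    "horizontal_filtering": {"decisions": "reconstructed", "input": "reconstructed"},
    "vertical_filtering": {"decisions": "reconstructed", "input": "DF intermediate"},
}

def samples_read_per_edge_line(component):
    """Samples read on one line crossing an edge (both sides)."""
    return 2 * DF_ACCESS[component]["referenced"]

print(samples_read_per_edge_line("luma"), samples_read_per_edge_line("chroma"))  # 8 4
```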
[0010] When DF is processed on an LCU by LCU basis in a raster scan order, there will be data dependency between LCUs as shown in Fig. 4A through Fig. 4D. Vertical edges in each LCU are horizontally filtered first and horizontal edges are then vertically filtered. The rightmost vertical edge of the current LCU cannot be horizontally filtered until the involved boundary pixels from the next LCU become available. Similarly, the lowest horizontal edge of the current LCU cannot be vertically filtered until the involved boundary pixels from the below LCU become available. Accordingly, data buffers are required to accommodate filtering operation due to the data dependency. For the horizontal DF process of the vertical boundary between two adjacent LCUs, four reconstructed pixel columns of one LCU height will be required for the luma component and two reconstructed pixel columns of one LCU height are required for the chroma component. Fig. 4A illustrates the pixels involved in the DF process of the vertical boundary between a current LCU 410 and an adjacent LCU 412 on the left, where four pixel columns from the adjacent LCU 412 are required. Similarly, four pixel rows from the above LCUs will also be buffered for the vertical DF process. Accordingly, four pixel rows for the adjacent LCUs 410a, 420a, 412a and 422a corresponding to LCUs 410, 420, 412 and 422 respectively are buffered. In Figs. 4A-4D, an unfiltered pixel 401 is indicated by a non-shaded smallest square. On the other hand, a
filtered pixel, such as pixel 404, is indicated by a shaded pattern, where different shaded patterns correspond to horizontally filtered, vertically filtered, and both horizontally and vertically filtered pixels. As shown in Fig. 4A, the three pixel columns on each side of the vertical boundaries may be changed after the horizontal DF filtering.
[0011] After horizontal filtering of the vertical edges of LCU 410, vertical DF process can be applied to the horizontal edges of LCU 410 except for the bottom edge. The horizontally filtered pixels, vertically filtered pixels, and horizontally and vertically filtered pixels after the vertical DF filtering are shown in Fig. 4B. The DF process is then moved to the next LCU 420. Horizontal DF process is applied to the vertical edges of LCU 420 except for the rightmost edge and the horizontally filtered pixels are indicated by respective shaded areas in Fig. 4C. The boundary pixels of LCU 410 corresponding to the vertical edge between LCU 410 and LCU 420 are also processed by the horizontal DF process during this step. After horizontal DF process of vertical edges of LCU 420, the vertical DF process is applied to the horizontal edges of LCU 420 except for the bottom edge. The corresponding processed pixels are shown in Fig. 4D. The DF process shown in Fig. 4A to Fig. 4D are intended to illustrate an example of data dependency associated with the DF process. Depending on the particular DF process used, the line and column buffer requirement due to data dependency may be different.
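The LCU-by-LCU processing order and the deferred edges described above can be summarized with the following schedule sketch; the operation labels and raster-scan bookkeeping are illustrative and do not implement the actual filtering.

```python
def lcu_deblock_schedule(lcu_rows, lcu_cols):
    """Raster-scan DF schedule: in each LCU, vertical edges are horizontally
    filtered first (the rightmost edge is deferred until the LCU to the right
    is available), then horizontal edges are vertically filtered (the bottom
    edge is deferred until the LCU below is available)."""
    ops = []
    for r in range(lcu_rows):
        for c in range(lcu_cols):
            ops.append((r, c, "horizontal filtering of vertical edges, "
                              "rightmost edge deferred"))
            ops.append((r, c, "vertical filtering of horizontal edges, "
                              "bottom edge deferred"))
    return ops

for op in lcu_deblock_schedule(lcu_rows=2, lcu_cols=2):
    print(op)
```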
[0012] In addition to pixel line buffers for unfiltered and filtered pixels of neighboring LCUs, there is also a need for storing other information to support LCU-based DF process.
[0013] For hardware based implementation, these column buffers are often implemented as on-chip registers or SRAMs since the storage requirement for preceding pixel columns is relatively small. For example, four reconstructed pixel columns of one LCU height and two reconstructed pixel columns of one LCU height are required for processing DF on luma and chroma respectively. On the other hand, the line buffers for storing the four pixels rows of one picture width for luma and two pixel rows of one picture width for chroma corresponding to the LCUs above may be sizeable, particularly for large size pictures. Line buffer implementation based on on-chip memory (e.g. Static Random Access Memory (SRAM)) may significantly increase the chip cost. On the other hand, line buffer implementation based on off-chip memory (e.g. Dynamic Random Access Memory (DRAM)) may significantly increase power consumption and system bandwidth. Therefore, it is desirable to reduce line buffers required for the DF process.
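To make the storage figures concrete, the following sketch computes the column-buffer and line-buffer sizes implied by this paragraph for an assumed 1920-pixel-wide picture with 64x64 LCUs and 4:2:0 chroma, treating the chroma counts as per-plane figures; all of these assumptions are illustrative and not stated in the text.

```python
def df_buffer_samples(pic_width=1920, lcu_size=64):
    """Column buffer: 4 luma + 2 chroma columns per plane, one LCU high.
    Line buffer: 4 luma + 2 chroma rows per plane, one picture wide."""
    column = 4 * lcu_size + 2 * 2 * (lcu_size // 2)    # Cb and Cr, 4:2:0
    line = 4 * pic_width + 2 * 2 * (pic_width // 2)
    return column, line

column, line = df_buffer_samples()
print(f"column buffer: {column} samples, line buffer: {line} samples")
# The line buffer grows with picture width, which is why it dominates the cost.
```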
SUMMARY
[0014] A method and apparatus for deblocking of reconstructed video are disclosed. In one embodiment of the present invention, the method divides a block boundary into two sub-boundaries and separates lines or columns across the sub-boundaries into two groups. The deblocking filter decision for each group is determined based on the lines or columns in the respective group. Accordingly, a system embodying the present invention leads to reduced storage requirement. Further, the filter decision can be based on one or more of the lines or columns. For example, in the case of an 8x8 block, the filter decision for each of the two groups can be based on the first and the fourth lines or columns respectively. The filter decision comprises boundary strength, filter on/off decision, strong/weak filter decision or a combination thereof.
[0015] In another embodiment according to the present invention, the method divides block edges of blocks in the LCUs into two edge groups, where the first edge group corresponds to horizontal block
edges between two LCUs and the second edge group corresponds to remaining block edges not included in the first edge group. The number of lines processed by a vertical filter in the first edge group is less than the number of lines processed by a vertical filter in the second edge group. Accordingly, a system embodying the present invention leads to reduced storage requirement. Furthermore, additional lines from the first group can be used to determine filter decision and/or filter operation for the vertical filtering. The additional lines may be stored in a sub-sampled pattern. Furthermore, the additional lines may correspond to reconstructed pixels or DF intermediate pixels.
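A sketch of the two edge groups described in this summary; representing an edge by its orientation and the vertical position of the edge within the picture is an assumption of this example.

```python
def edge_group(orientation, y, lcu_size=64):
    """Group 1: horizontal block edges that coincide with a horizontal LCU
    boundary (these would otherwise need large line buffers).
    Group 2: all remaining block edges."""
    if orientation == "horizontal" and y > 0 and y % lcu_size == 0:
        return 1
    return 2

print(edge_group("horizontal", y=128))  # 1: lies on an LCU row boundary
print(edge_group("horizontal", y=72))   # 2: internal horizontal edge
print(edge_group("vertical", y=128))    # 2: vertical edges are never in group 1
```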
BRIEF DESCRIPTION OF DRAWINGS
[0016] Fig. 1A illustrates an exemplary adaptive inter/intra video encoding system incorporating DF, SAO and ALF in-loop processing.
[0017] Fig. 1B illustrates an exemplary adaptive inter/intra video decoding system incorporating DF, SAO and ALF in-loop processing.
[0018] Fig. 2A illustrates an example of a vertical edge between two 8x8 blocks.
[0019] Fig. 2B illustrates an example of filter on/off decision for the vertical edge based on line 2 and line 5 according to HM-4.0.
[0020] Fig. 2C illustrates an example of filter strong/weak decision for each line across the vertical boundary based on pixels in the respective line.
[0021] Fig. 3 illustrates an example of a horizontal edge between two 8x8 blocks.
[0022] Fig. 4A-Fig. 4D illustrate an example of various stages of horizontal and vertical DF process.
[0023] Fig. 5A-Fig. 5D illustrate various examples of sub-sampling pattern to store additional lines for filter decision and/or filter operation according to an embodiment of the present invention.
[0024] Fig. 6A illustrates an example of dividing a vertical block boundary into two sub-boundaries according to an embodiment of the present invention.
[0025] Fig. 6B illustrates an example of dividing a horizontal block boundary into two sub-boundaries according to an embodiment of the present invention.
[0026] Fig. 7A-Fig. 7B illustrate 4x4 boundary strength determination for a vertical and horizontal boundary between two 4x4 blocks respectively.
[0027] Fig. 8 illustrates an example of boundary strength determination according to an embodiment of the present invention.
[0028] Fig. 9 illustrates an example of dependency of filter decision threshold β and filter clipping threshold tc on the quantization parameter QP for deblocking filter.
[0029] Fig. 10A illustrates an example of pixels used for filter on/off decision and filter strong/weak decision for a vertical sub-boundary between two 4x4 blocks.
[0030] Fig. 10B illustrates an example of pixels used for filter on/off decision and filter strong/weak decision for a horizontal sub-boundary between two 4x4 blocks.
[0031] Fig. 11 illustrates an example of luma weak filtering, where the Δ's are equal to zero if p2, p1, p0, q0, q1 and q2 lie on a straight line.
DETAILED DESCRIPTION
[0032] In an embodiment of the present invention, the line buffer for storing pixel rows of the above LCUs according to the LCU-based DF processing is reduced. For horizontal edges between two LCU rows, only reconstructed pixels p0 and q0-q3 are used to derive filter on/off and strong/weak decisions for the luma component. Furthermore, according to the present invention, the vertical filtering will only be applied to pixels corresponding to rows of p0 and q0-q3, where the vertical filtering is applied to DF intermediate pixels p0 and q0-q3. For the chroma component according to the present invention, the vertical filtering will only be applied to pixels corresponding to rows of p0 and q0, where the vertical filtering is applied to DF intermediate pixels p0 and q0-q1. For the DF process on other edges, the DF process according to HM-4.0 can be used. Accordingly, only one luma line buffer and one chroma line buffer are required to store reconstructed pixels of row p0 from the LCUs above. In HM-4.0, one luma line buffer and one chroma line buffer for the bottom row of the blocks above are already used for intra prediction, and the same line buffers can be used to fulfill the need of line buffers for the DF process according to the present invention. Consequently, for the DF process according to the present invention, there is no need for any additional line buffers beyond what have already been used in the encoder or decoder system for intra prediction.
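The saving described in this paragraph can be illustrated with a short calculation, again assuming a 1920-pixel-wide 4:2:0 picture and per-plane chroma row counts; the numbers are illustrative only.

```python
def df_line_buffer_samples(pic_width=1920, scheme="HM-4.0"):
    """Samples of the LCU row above that the DF process must keep."""
    if scheme == "HM-4.0":
        luma_rows, chroma_rows = 4, 2   # per the background section
    else:                               # reduced scheme of this embodiment
        luma_rows, chroma_rows = 1, 1   # only row p0 per component
    return luma_rows * pic_width + chroma_rows * 2 * (pic_width // 2)

before = df_line_buffer_samples(scheme="HM-4.0")
after = df_line_buffer_samples(scheme="reduced")
print(before, after, "saved:", before - after)
# The remaining p0 row coincides with the row already kept for intra
# prediction, so no additional DF line buffer is needed.
```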
[0033] While the vertical DF filtering across a horizontal edge between two LCUs according to the present invention may only modify line p0, the filtering decisions and filter parameter derivation can be extended to include pixels corresponding to lines p1-p3 for potential improvement of the DF filtering. The computations may become more complicated if more pixels are involved. As a tradeoff between the cost and subjective quality related to the DF process, an embodiment according to the present invention utilizes sub-sampled pixels from lines p1-p3. The pixel data stored in the additional line buffers may correspond to either reconstructed pixels or DF intermediate pixels. Furthermore, any sub-sampling pattern may be used to reduce the computations as well as the storage requirement involved with the filtering decision. Fig. 5A to Fig. 5D illustrate four examples of sub-sampled patterns of pixel data for filter decision derivations. These samples may also be used for the vertical DF filtering operation at the LCU horizontal boundaries.
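A sketch of keeping sub-sampled pixels from rows p1-p3 in the additional line buffers; the alternating 2:1 pattern used here is only one possible pattern in the spirit of Fig. 5A to Fig. 5D, not the exact patterns of those figures.

```python
def subsample_p1_to_p3(row_p1, row_p2, row_p3):
    """Store every other sample of each row, alternating the starting phase
    so that neighboring columns are covered by different rows."""
    stored = {}
    for i, (name, row) in enumerate((("p1", row_p1), ("p2", row_p2), ("p3", row_p3))):
        stored[name] = row[i % 2::2]    # half of the samples per row
    return stored

rows = {name: list(range(16)) for name in ("p1", "p2", "p3")}
print(subsample_p1_to_p3(rows["p1"], rows["p2"], rows["p3"]))
```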
[0034] In the example of filter on/off decision as shown in Fig. 2B, the decision is based on pixels from line 2 (i.e., the third line) and line 5 (i.e., the sixth line) according to HM-4.0. Therefore, when applying the DF filtering to the bottom four pixel rows of the above LCUs (410a, 420a, 412a and 422a), the on/off decision for the 8x8 blocks will have to be stored. An embodiment according to the present invention can eliminate the requirement to store the on/off decision for DF filtering on the bottom four pixel rows of the above LCUs. According to the present invention, the on/off decision of the horizontal DF filtering for the upper four lines of a block is based on line 2, and the on/off decision of the horizontal DF filtering for the lower four lines is based on line 5. Accordingly, the on/off decision for the lower four lines and the upper four lines can be determined based on pixels within respective groups without referring to each other. There is no need to store the on/off decision of the horizontal DF filtering for the lower four pixel rows of the above LCUs.
[0035] The above example illustrates a modified horizontal DF process to reduce the memory requirement by removing the data dependency between the upper four lines and the lower four lines of the 8x8 blocks at vertical block boundaries; the same approach can also be applied to horizontal block boundaries. Furthermore, the filter decision derivation is not restricted to line 2 and line 5.
Accordingly, an embodiment of the present invention treats the boundary between two 8x8 luma blocks as two sub-boundaries. For a vertical boundary, the two sub-boundaries correspond to a lower boundary 610 and an upper boundary 620 between two adjacent 8x8 blocks as shown in Fig. 6A. The sub- boundary pixels associated with a lower sub-boundary 610 (shown in long dashed line) are indicated by box 612 and the sub-boundary pixels associated with the upper sub-boundary 620 (shown in short dashed line) is indicated by box 622 as shown in Fig. 6A. The sub-boundary pixels 612 are also called a first pixel group, which comprises a first group of line segments across the lower sub-boundary 610 of the vertical boundary. Similarly, the sub-boundary pixels 622 are also called a second pixel group, which comprises a second group of line segments across the upper sub-boundary 620 of the vertical boundary. For a horizontal boundary, the two sub-boundaries correspond to a left boundary 630 (shown in short dashed line) and a right boundary 640 (shown in long dashed line) between two adjacent 8x8 blocks are shown in Fig. 6B. The sub-boundary pixels associated with a left sub-boundary 630 are indicated by box 632, as shown in Fig. 6B and the pixels in box 632 are called the first pixel group. The first pixel group in this case comprises a first group of column segments across the left sub-boundary 630 of the horizontal boundary. The sub-boundary pixels associated with the right sub-boundary are indicated by box 642, as shown in Fig. 6B and the pixels in box 642 are called the second pixel group. The second pixel group in this case comprises a second group of column segments across the right sub-boundary 640 of the horizontal boundary. An embodiment according to the present invention determines the filter on/off and strong/weak decisions and applies the DF filtering individually based on pixels from the respective pixel group. In general, the boundary strength, filter on/off decision, strong/weak filter decision, or a combination thereof for the first pixel group is determined solely based on pixels from the first pixel group. For example, the edge activity measure, dl can be computed as follows:
d1 = | p2 - 2*p1 + p0 | + | q2 - 2*q1 + q0 |,   (1)
where the computation is performed using one line of the respective sub-boundary pixels. Accordingly, the edge activity measure d1_upper for the upper sub-boundary in Fig. 6A can be computed based on one of the upper four lines. Similarly, the edge activity measure d1_lower for the lower sub-boundary in Fig. 6A can be computed based on one of the lower four lines. For example, d1_upper can be determined using line 3 (i.e., the fourth line) and d1_lower can be determined using line 4 (i.e., the fifth line):
d1_upper = | p2,3 - 2*p1,3 + p0,3 | + | q2,3 - 2*q1,3 + q0,3 |, and   (2)
d1_lower = | p2,4 - 2*p1,4 + p0,4 | + | q2,4 - 2*q1,4 + q0,4 |.   (3)
[0036] As shown in equations (2) and (3), the edge activity measure consists of two parts, where the first part, d1_upperL or d1_lowerL, is associated with pixels on the left side of the sub-boundary and the second part, d1_upperR or d1_lowerR, is associated with pixels on the right side of the sub-boundary.
After the edge activity measure for a sub-boundary is determined, the condition regarding whether to apply the DF filtering across the respective sub-boundary is tested according to
(d1_upper << 1) < Beta_Luma, and   (4)
(d1_lower << 1) < Beta_Luma,   (5)
where Beta_Luma is a threshold. If equation (4) is satisfied, the horizontal DF filtering is applied to the upper sub-boundary. If equation (5) is satisfied, the horizontal DF filtering is applied to the lower sub-boundary. While one line from the upper four lines is used to determine filter on/off control for the upper four lines, more than one line may also be used to determine filter on/off control. Similarly, more than one line from the lower four lines may be used to determine filter on/off control for the lower four lines.
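The per-sub-boundary on/off decision described above can be summarized with the following sketch. It is illustrative only and not the HM reference code; the function names, the example pixel values, and the Beta_Luma value are assumptions made for the example.

```python
def edge_activity_d1(line):
    """Equation (1): d1 = |p2 - 2*p1 + p0| + |q2 - 2*q1 + q0| for one line."""
    p2, p1, p0, q0, q1, q2 = line
    return abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)


def sub_boundary_on_off(group_lines, decision_index, beta_luma):
    """Equations (4)/(5): filter the four-line group only when (d1 << 1) < Beta_Luma.
    decision_index selects which line of the group supplies the decision."""
    d1 = edge_activity_d1(group_lines[decision_index])
    return (d1 << 1) < beta_luma


# Hypothetical pixel data: each tuple is (p2, p1, p0, q0, q1, q2) for one line.
upper_lines = [(60, 61, 62, 70, 71, 72)] * 4
lower_lines = [(58, 59, 60, 69, 70, 71)] * 4
BETA_LUMA = 64  # assumed threshold value for the example

# Upper group decides on its last line (line 3 of the block), lower group on its
# first line (line 4 of the block); neither reads pixels from the other group.
filter_upper = sub_boundary_on_off(upper_lines, 3, BETA_LUMA)
filter_lower = sub_boundary_on_off(lower_lines, 0, BETA_LUMA)
print(filter_upper, filter_lower)
```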
[0037] In one variation of the DF process in HEVC, an additional test is performed to determine whether to use a weak DF filter or a strong filter. The edge activity measures corresponding to the right side and the left side of the sub-boundary are compared with another threshold, sideThreshold. For example, the following tests are performed for the upper sub-boundary:
(d1_upperR << 1) < sideThreshold, and   (6)
(d1_upperL << 1) < sideThreshold.   (7)
If the condition in equation (6) is satisfied, the weak filter is applied to the second pixel from the vertical boundary for each upper line on the right side of the upper sub-boundary. If the condition in equation (7) is satisfied, the weak filter is applied to the second pixel from the vertical boundary for each upper line on the left side of the upper sub-boundary. A similar process for the lower sub-boundary can be performed by evaluating the conditions:
(d1_lowerR << 1) < sideThreshold, and   (8)
(d1_lowerL << 1) < sideThreshold.   (9)
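The side tests of equations (6) through (9) can be sketched as follows. This is an illustrative outline, not the normative decision code; the helper names and the sideThreshold value in the usage line are assumptions.

```python
def one_side_activity(x2, x1, x0):
    """One-sided part of the sub-boundary measure, e.g. d1_upperL = |p2 - 2*p1 + p0|."""
    return abs(x2 - 2 * x1 + x0)


def second_pixel_filter_flags(p2, p1, p0, q0, q1, q2, side_threshold):
    """Equations (6)-(9): decide, per side, whether the weak filter also modifies
    the second pixel (p1 on the left side, q1 on the right side)."""
    d1_left = one_side_activity(p2, p1, p0)
    d1_right = one_side_activity(q2, q1, q0)
    filter_p1 = (d1_left << 1) < side_threshold    # left-side test, eq. (7)/(9)
    filter_q1 = (d1_right << 1) < side_threshold   # right-side test, eq. (6)/(8)
    return filter_p1, filter_q1


# Hypothetical usage with an assumed sideThreshold value.
print(second_pixel_filter_flags(60, 61, 62, 70, 71, 72, side_threshold=12))
```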
[0038] An embodiment according to the present invention treats the boundary between two 4x4 chroma blocks as two sub-boundaries, where a vertical boundary can be separated into an upper sub-boundary and a lower sub-boundary, and a horizontal boundary can be separated into a left sub-boundary and a right sub-boundary. The embodiments of the present invention for the luma component are applicable to the chroma components, where the DF process may be performed at reduced resolution.
[0039] The derivations of the filter on/off and strong/weak decisions illustrated above are for a vertical boundary. The corresponding derivations for a horizontal boundary can be obtained similarly. While one line from the upper four lines is used to determine the strong/weak filter decision for the upper four lines, more than one line may also be used to determine strong/weak filter control. Similarly, more than one line from the lower four lines may be used to determine strong/weak filter control for the lower four lines.
[0040] In HEVC, the deblocking filter is applied to boundaries of 8x8 blocks, where the boundary strength (BS) is determined based on 4x4 blocks. For the luma component, the stronger of the boundary strengths for the two neighboring 4x4 blocks associated with an 8x8 block is used as the boundary strength of the corresponding boundary of the 8x8 block. An embodiment according to the present invention derives the boundary strength for the sub-boundaries between two 8x8 blocks individually based on the respective 4x4 blocks. Fig. 7A illustrates an example of a horizontal sub-boundary 710 between two 4x4 blocks, P and Q. Fig. 7B illustrates an example of a vertical sub-boundary 720 between two 4x4 blocks, P and Q. An exemplary boundary strength derivation for the sub-boundary is shown in Fig. 8. The boundary strength decision starts from block 810. A test, "P or Q is intra coded?", is performed in step 820. If the result is Yes (as indicated by "Y" in Fig. 8), BS is assigned a value of 2. If the test result of step 820 is No (as indicated by "N" in Fig. 8), a further test 830 is performed. In step 830, the test, "((Boundary is TU boundary) and (P or Q contains coefficients)) or (P and Q have different reference picture or MV difference > 4)?", is performed. If the test result is Yes, BS is assigned a value of 1. Otherwise, BS is assigned a value of 0. While the use of 4x4 sub-boundaries for 8x8 blocks is illustrated above, the present invention may also be applied to other block and sub-boundary sizes.
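The boundary strength decision of Fig. 8 can be sketched as below. The function signature and the boolean/integer inputs are simplified stand-ins for the coding parameters of blocks P and Q, not an actual codec data structure.

```python
def boundary_strength(p_is_intra, q_is_intra, is_tu_boundary,
                      p_has_coefficients, q_has_coefficients,
                      different_reference, mv_difference):
    """Sub-boundary BS decision following Fig. 8: BS = 2 if either block is intra
    coded; otherwise BS = 1 if the TU-boundary/coefficient test or the motion test
    holds; otherwise BS = 0. mv_difference is in quarter-pel units, as in the text."""
    if p_is_intra or q_is_intra:
        return 2
    if (is_tu_boundary and (p_has_coefficients or q_has_coefficients)) \
            or different_reference or (mv_difference > 4):
        return 1
    return 0


# Hypothetical example: inter-coded blocks sharing a TU boundary, P has coefficients.
print(boundary_strength(False, False, True, True, False, False, 0))  # -> 1
```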
[0041] The BS value can be used to control deblocking operations such as filter on/off control. An exemplary BS usage is shown in Table 1. If the BS value is 0, the deblocking filter is turned off. If the BS value is 1, the luma deblocking filtering is turned on and the filter parameter tc offset, as defined in the HEVC standard, is set to 0. If the BS value is 2, both luma and chroma deblocking filtering are turned on and tc offset is set to 2.
Table 1.
BS value | Deblocking operation | tc offset
---|---|---
0 | Deblocking filter off | -
1 | Luma deblocking filtering on | 0
2 | Luma and chroma deblocking filtering on | 2
[0042] According to the HEVC standard, the deblocking parameters comprise β and tc. The parameter β is used to determine the filter decision thresholds and its value is related to the quantization parameter (QP) of the block. An embodiment according to the present invention determines the QP for a sub-boundary according to QP = (QP_P + QP_Q)/2, where QP_P is the QP for block P and QP_Q is the QP for block Q. The dependency of β on QP is shown in Fig. 9. On the other hand, the parameter tc is used as the filter clipping threshold. The dependency of tc on QP is also shown in Fig. 9. If BS is greater than 1, the parameter tc is specified using QP+2 as the table input. Otherwise, the parameter tc is specified using QP as the table input. As shown in Table 1, the deblocking filtering may be on only if the BS value is greater than 0. The parameter β is determined as shown in Fig. 9. The filter on/off decision can be determined according to the BS value for the respective sub-boundary and the edge activity measured using the 4x4 blocks. Fig. 10A illustrates an example of deriving the edge activity based on two lines across the sub-boundary:
dp = | p2,0 - 2*p1,0 + p0,0 | + | p2,3 - 2*p1,3 + p0,3 |,   (10)
dq = | q2,0 - 2*q1,0 + q0,0 | + | q2,3 - 2*q1,3 + q0,3 |, and   (11)
d = dp + dq.   (12)
If BS > 0 and d < β, the deblocking filter is applied. Otherwise, the deblocking filtering is not applied. The above derivation illustrates a specific example of deriving the edge activity based on two lines from the two neighboring 4x4 blocks. However, more or fewer lines may be used to derive the edge activity. Furthermore, while a specific formula is used to derive the edge activity, a person skilled in the art may use other formulas to measure the edge activity.
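The on/off decision flow of paragraph [0042], from the averaged QP to the comparison of d against β, can be outlined as follows. The beta_from_qp mapping is a placeholder rather than the β table of the HEVC standard, and the function names are assumptions for this sketch.

```python
def average_qp(qp_p, qp_q):
    """QP for the sub-boundary, QP = (QP_P + QP_Q) / 2 as described above."""
    return (qp_p + qp_q) // 2


def beta_from_qp(qp):
    """Placeholder mapping; the real values come from the beta table of the
    HEVC standard (Fig. 9), which is not reproduced here."""
    return max(0, 2 * qp - 30)


def edge_activity_two_lines(line0, line3):
    """Equations (10)-(12): d = dp + dq from line 0 and line 3 of the sub-boundary.
    Each line is a tuple (p2, p1, p0, q0, q1, q2)."""
    def dp(l):
        return abs(l[0] - 2 * l[1] + l[2])

    def dq(l):
        return abs(l[5] - 2 * l[4] + l[3])

    return dp(line0) + dp(line3) + dq(line0) + dq(line3)


def filter_on(bs, d, beta):
    """Apply the deblocking filter only when BS > 0 and d < beta."""
    return bs > 0 and d < beta


# Hypothetical usage.
qp = average_qp(32, 34)
d = edge_activity_two_lines((60, 61, 62, 70, 71, 72), (58, 59, 60, 69, 70, 71))
print(filter_on(bs=1, d=d, beta=beta_from_qp(qp)))
```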
[0043] The strong/weak filter decision can also be derived based on the edge activity and other measures. For example, if (d < (β >> 2) && |p3 - p0| + |q0 - q3| < (β >> 3) && |p0 - q0| < ((5*tc + 1) >> 1)) is true for both line 0 and line 3, a strong filter is selected. Otherwise, a weak filter is selected. After the BS, filter on/off control and strong/weak filter decisions are made, the deblocking filtering can be applied to the luma and chroma signals. For the luma signal with pixels p3, p2, p1, p0, q0, q1, q2, and q3 across the boundary, the strong luma filtering can be performed according to:
p0' = Clip3(p0 - 2*tc, p0 + 2*tc, (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3)   (13)
p1' = Clip3(p1 - 2*tc, p1 + 2*tc, (p2 + p1 + p0 + q0 + 2) >> 2)   (14)
p2' = Clip3(p2 - 2*tc, p2 + 2*tc, (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3)   (15)
q0' = Clip3(q0 - 2*tc, q0 + 2*tc, (p1 + 2*p0 + 2*q0 + 2*q1 + q2 + 4) >> 3)   (16)
q1' = Clip3(q1 - 2*tc, q1 + 2*tc, (p0 + q0 + q1 + q2 + 2) >> 2)   (17)
q2' = Clip3(q2 - 2*tc, q2 + 2*tc, (p0 + q0 + q1 + 3*q2 + 2*q3 + 4) >> 3)   (18)
where p2', p1', p0', q0', q1', and q2' are the filtered pixel data, and Clip3(min, max, x) is a function that clips the variable x between min and max.
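The strong luma filtering of equations (13) through (18) can be expressed for a single line of pixels as in the following sketch, which mirrors the equations above; the function and variable names are chosen for illustration and the tc value in the usage line is assumed.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))


def strong_luma_filter_line(p3, p2, p1, p0, q0, q1, q2, q3, tc):
    """Strong luma filtering per equations (13)-(18) for one line across the boundary."""
    p0n = clip3(p0 - 2 * tc, p0 + 2 * tc, (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3)
    p1n = clip3(p1 - 2 * tc, p1 + 2 * tc, (p2 + p1 + p0 + q0 + 2) >> 2)
    p2n = clip3(p2 - 2 * tc, p2 + 2 * tc, (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3)
    q0n = clip3(q0 - 2 * tc, q0 + 2 * tc, (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3)
    q1n = clip3(q1 - 2 * tc, q1 + 2 * tc, (p0 + q0 + q1 + q2 + 2) >> 2)
    q2n = clip3(q2 - 2 * tc, q2 + 2 * tc, (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3)
    return p2n, p1n, p0n, q0n, q1n, q2n


# Hypothetical usage with an assumed tc value.
print(strong_luma_filter_line(60, 60, 61, 62, 80, 81, 81, 82, tc=4))
```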
[0044] When the weak filter is selected, a decision (dEp1) regarding whether to filter p1 is determined by testing "if (dp < ((β + (β >> 1)) >> 3))". If the condition is true, dEp1 is set to 1. Also, a decision (dEq1) regarding whether to filter q1 is determined by testing "if (dq < ((β + (β >> 1)) >> 3))". If the condition is true, dEq1 is set to 1. Furthermore, Δ is calculated, where Δ = (9*(q0 - p0) - 3*(q1 - p1) + 8) >> 4.
If abs(Δ) < tc*10, then Δ = Clip3(-tc, tc, Δ), p0' = Clip1Y(p0 + Δ) and q0' = Clip1Y(q0 - Δ),
where Clip1Y(x) clips the x value between the maximum and minimum luminance values. When dEp1 is set to 1, Δp = Clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + Δ) >> 1) and p1' = Clip1Y(p1 + Δp). When dEq1 is set to 1, Δq = Clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - Δ) >> 1) and q1' = Clip1Y(q1 + Δq).
If p2, p1, p0, q0, q1, and q2 lie on the same line, as shown in Fig. 11, then Δ, Δp and Δq will be zero. The example shown above illustrates the derivation of the boundary strength, filter on/off decision, and strong/weak filter decision based on four lines across a boundary between two 8x8 blocks. The present invention may also be applied to other block sizes by dividing the boundary into sub-boundaries and deriving the boundary strength, filter on/off decision, and strong/weak filter decision based on pixels with respect to individual sub-boundaries.
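The weak luma filtering of paragraph [0044] can be outlined for a single line as follows. The bit-depth default and the function names are assumptions for the sketch; the dEp1/dEq1 flags are assumed to be computed as described above.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))


def clip1y(x, bit_depth=8):
    """Clip to the valid luma range for the assumed bit depth."""
    return clip3(0, (1 << bit_depth) - 1, x)


def weak_luma_filter_line(p2, p1, p0, q0, q1, q2, tc, d_ep1, d_eq1, bit_depth=8):
    """Weak luma filtering per paragraph [0044] for one line; d_ep1/d_eq1 are the
    dEp1/dEq1 flags derived from dp and dq as described above."""
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tc * 10:
        return p1, p0, q0, q1  # no filtering for this line
    delta = clip3(-tc, tc, delta)
    p0n = clip1y(p0 + delta, bit_depth)
    q0n = clip1y(q0 - delta, bit_depth)
    p1n, q1n = p1, q1
    if d_ep1:
        dp = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
        p1n = clip1y(p1 + dp, bit_depth)
    if d_eq1:
        dq = clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - delta) >> 1)
        q1n = clip1y(q1 + dq, bit_depth)
    return p1n, p0n, q0n, q1n


# Hypothetical usage.
print(weak_luma_filter_line(60, 61, 62, 70, 71, 72, tc=4, d_ep1=1, d_eq1=1))
```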
[0045] For chroma filtering, the boundary strength (BS) is used to determine whether the deblocking filtering should be applied. If BS > 1, then Δ = Clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3)), p0' = Clip1C(p0 + Δ) and q0' = Clip1C(q0 - Δ), where Clip1C(x) clips the x value between the maximum and minimum chroma values.
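The chroma filtering of paragraph [0045] can be sketched for a single line as follows; the bit-depth default and the function names are illustrative assumptions.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))


def clip1c(x, bit_depth=8):
    """Clip to the valid chroma range for the assumed bit depth."""
    return clip3(0, (1 << bit_depth) - 1, x)


def chroma_filter_line(p1, p0, q0, q1, tc, bs, bit_depth=8):
    """Chroma filtering per paragraph [0045] for one line; only applied when BS > 1."""
    if bs <= 1:
        return p0, q0
    delta = clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3))
    return clip1c(p0 + delta, bit_depth), clip1c(q0 - delta, bit_depth)


# Hypothetical usage with an assumed tc value.
print(chroma_filter_line(61, 62, 70, 71, tc=4, bs=2))
```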
[0046] The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
[0047] Embodiments of the present invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array
(FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
[0048] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims
1. A method of deblocking for reconstructed video in a video coding system, the method comprising: receiving reconstructed pixel data associated with a block boundary between two adjacent NxN blocks, where N is an integer;
separating the block boundary into a first sub-boundary and a second sub-boundary for a vertical boundary or horizontal boundary;
determining a first pixel group, wherein the first pixel group comprises a first group of line segments across the first sub-boundary of the vertical boundary or a first group of column segments across the first sub-boundary of the horizontal boundary;
determining a second pixel group, wherein the second pixel group comprises a second group of line segments across the second sub-boundary of the vertical boundary or a second group of column segments across the second sub-boundary of the horizontal boundary;
determining a first filter decision for the first sub-boundary based on the first pixel group;
determining a second filter decision for the second sub-boundary based on the second pixel group; applying deblocking filter to the first sub-boundary according to the first filter decision; and applying deblocking filter to the second sub-boundary according to the second filter decision.
2. The method of Claim 1, wherein N corresponds to 8, the first pixel group comprises 4 line segments or 4 column segments, and the second pixel group comprises 4 line segments or 4 column segments.
3. The method of Claim 2, wherein at least one line segment or column segment of the first pixel group is used to determine filter on/off decision of the first filter decision, and at least one line segment or column segment of the second pixel group is used to determine filter on/off decision of the second filter decision.
4. The method of Claim 1, wherein filter strong/weak decision of the first filter decision for the first pixel group is derived based on at least one line segment or column segment of the first pixel group, and filter strong/weak decision of the second filter decision for the second pixel group is derived based on at least one line segment or column segment of the second pixel group.
5. The method of Claim 1, wherein boundary strength of the first filter decision is determined according to coding parameters of two sub-blocks associated with the first sub-boundary and the boundary strength of the second filter decision is determined according to coding parameters of two sub-blocks associated with the second sub-boundary.
6. The method of Claim 1, wherein the first filter decision comprises boundary strength, filter on/off decision, strong/weak filter decision or a combination thereof, and the second filter decision comprises boundary strength, filter on/off decision, strong/weak filter decision or a combination thereof.
7. A method of deblocking for reconstructed video in a video coding system, the method comprising: receiving reconstructed pixel data, wherein the reconstructed pixel data is configured into LCUs (largest coding units) and each LCU is divided into blocks;
identifying horizontal block edges of the blocks in the LCUs, wherein the horizontal block edges are divided into a first edge group and a second edge group, and wherein the first edge group corresponds to horizontal block edges between two LCUs and the second edge group corresponds to remaining horizontal block edges not included in the first edge group;
applying first vertical filtering to the reconstructed pixel data corresponding to one or more first lines above a first horizontal block edge in the first edge group; and
applying second vertical filtering to the reconstructed pixel data corresponding to one or more second lines above a second horizontal block edge in the second edge group, wherein a first number of said one or more first lines is smaller than a second number of said one or more second lines.
8. The method of Claim 7, wherein the first number is one.
9. The method of Claim 8, wherein said one or more first lines are stored in one line buffer, and wherein said one line buffer is shared with intra prediction process used in the video coding system.
10. The method of Claim 7, wherein additional pixel data corresponding to one or more third lines above the first horizontal block edge are used to determine filter decision and/or filter operation for the first vertical filtering, and wherein the additional pixel data is stored in a sub-sampled pattern.
11. The method of Claim 10, wherein the additional pixel data corresponds to the reconstructed pixel data or intermediate pixel data.
12. An apparatus for deblocking of reconstructed video in a video coding system, the apparatus comprising:
means for receiving reconstructed pixel data associated with a block boundary between two adjacent NxN blocks, where N is an integer;
means for separating the block boundary into a first sub-boundary and a second sub-boundary for a vertical boundary or horizontal boundary;
means for determining a first pixel group, wherein the first pixel group comprises a first group of line segments across the first sub-boundary of the vertical boundary or a first group of column segments across the first sub-boundary of the horizontal boundary;
means for determining a second pixel group, wherein the second pixel group comprises a second group of line segments across the second sub-boundary of the vertical boundary or a second group of column segments across the second sub-boundary of the horizontal boundary;
means for determining a first filter decision for the first sub-boundary based on the first pixel group; means for determining a second filter decision for the second sub-boundary based on the second pixel group;
means for applying deblocking filter to the first sub-boundary according to the first filter decision; and
means for applying deblocking filter to the second sub-boundary according to the second filter decision.
13. The apparatus of Claim 12, wherein N corresponds to 8, the first pixel group comprises 4 line segments or 4 column segments, and the second pixel group comprises 4 line segments or 4 column segments.
14. The apparatus of Claim 13, wherein at least one line segment or column segment of the first pixel group is used to determine filter on/off decision of the first filter decision, and at least one line segment or column segment of the second pixel group is used to determine filter on/off decision of the second filter decision.
15. The apparatus of Claim 12, wherein filter strong/weak decision of the first filter decision for the first pixel group is derived based on at least one line segment or column segment of the first pixel group, and filter strong/weak decision of the second filter decision for the second pixel group is derived based on at least one line segment or column segment of the second pixel group.
16. The apparatus of Claim 12, wherein boundary strength of the first filter decision is determined according to coding parameters of two sub-blocks associated with the first sub-boundary and boundary strength of the second filter decision is determined according to coding parameters of two sub-blocks associated with the second sub-boundary.
17. An apparatus of deblocking for reconstructed video in a video coding system, the apparatus comprising:
means for receiving reconstructed pixel data, wherein the reconstructed pixel data is configured into LCUs (largest coding units) and each LCU is divided into blocks;
means for identifying horizontal block edges of the blocks in the LCUs, wherein the horizontal block edges are divided into a first edge group and a second edge group, and wherein the first edge group corresponds to horizontal block edges between two LCUs and the second edge group corresponds to remaining horizontal block edges not included in the first edge group;
means for applying first vertical filtering to the reconstructed pixel data corresponding to one or more first lines above a first horizontal block edge in the first edge group; and
means for applying second vertical filtering to the reconstructed pixel data corresponding to one or more second lines above a second horizontal block edge in the second edge group, wherein a first number of said one or more first lines is smaller than a second number of said one or more second lines.
18. The apparatus of Claim 17, wherein the first number is one.
19. The apparatus of Claim 18, wherein said one or more first lines are stored in one line buffer, and wherein said one line buffer is shared with intra prediction process used in the video coding apparatus.
20. The apparatus of Claim 17, wherein additional pixel data corresponding to one or more third lines above the first horizontal block edge are used to determine filter decision and/or filter operation for the first vertical filtering, and wherein the additional pixel data is stored in a sub-sampled pattern.
21. The apparatus of Claim 20, wherein the additional pixel data corresponds to the reconstructed pixel data or intermediate pixel data.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/342,334 US9554128B2 (en) | 2011-09-13 | 2012-08-09 | Method and apparatus for reduction of deblocking filter |
EP12831238.6A EP2737704B1 (en) | 2011-09-13 | 2012-08-09 | Method and apparatus for reduction of deblocking filter |
CN201280044241.5A CN103947208B (en) | 2011-09-13 | 2012-08-09 | Method and device for reducing deblocking filter |
IL230286A IL230286A (en) | 2011-09-13 | 2014-01-02 | Method and apparatus for reduction of deblocking filter |
US15/375,596 US10003798B2 (en) | 2011-09-13 | 2016-12-12 | Method and apparatus for reduction of deblocking filter |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161533892P | 2011-09-13 | 2011-09-13 | |
US61/533,892 | 2011-09-13 | ||
CN201110270680 | 2011-09-14 | ||
CN201110270680.5 | 2011-09-14 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/342,334 A-371-Of-International US9554128B2 (en) | 2011-09-13 | 2012-08-09 | Method and apparatus for reduction of deblocking filter |
US15/375,596 Division US10003798B2 (en) | 2011-09-13 | 2016-12-12 | Method and apparatus for reduction of deblocking filter |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013037254A1 true WO2013037254A1 (en) | 2013-03-21 |
Family
ID=47882598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2012/079889 WO2013037254A1 (en) | 2011-09-13 | 2012-08-09 | Method and apparatus for reduction of deblocking filter |
Country Status (5)
Country | Link |
---|---|
US (2) | US9554128B2 (en) |
EP (1) | EP2737704B1 (en) |
CN (1) | CN103947208B (en) |
IL (1) | IL230286A (en) |
WO (1) | WO2013037254A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105745931A (en) * | 2013-11-24 | 2016-07-06 | Lg电子株式会社 | Method and apparatus for encoding and decoding video signal using adaptive sampling |
WO2017045101A1 (en) * | 2015-09-14 | 2017-03-23 | Mediatek Singapore Pte. Ltd. | Advanced deblocking filter in video coding |
EP3178228A4 (en) * | 2014-09-15 | 2018-02-21 | HFI Innovation Inc. | Method of deblocking for intra block copy in video coding |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2737704B1 (en) * | 2011-09-13 | 2020-02-19 | HFI Innovation Inc. | Method and apparatus for reduction of deblocking filter |
US9591302B2 (en) * | 2012-07-02 | 2017-03-07 | Microsoft Technology Licensing, Llc | Use of chroma quantization parameter offsets in deblocking |
US9414054B2 (en) | 2012-07-02 | 2016-08-09 | Microsoft Technology Licensing, Llc | Control and use of chroma quantization parameter values |
US20160173897A1 (en) * | 2014-12-10 | 2016-06-16 | Haihua Wu | High Parallelism Dependency Pattern for GPU Based Deblock |
US10455254B2 (en) * | 2016-11-10 | 2019-10-22 | Mediatek Inc. | Method and apparatus of video coding |
WO2019007492A1 (en) * | 2017-07-04 | 2019-01-10 | Huawei Technologies Co., Ltd. | Decoder side intra mode derivation tool line memory harmonization with deblocking filter |
US11153607B2 (en) * | 2018-01-29 | 2021-10-19 | Mediatek Inc. | Length-adaptive deblocking filtering in video coding |
CN117640949A (en) | 2018-05-23 | 2024-03-01 | 松下电器(美国)知识产权公司 | Decoding device and encoding device |
WO2020005031A1 (en) * | 2018-06-28 | 2020-01-02 | 한국전자통신연구원 | Video encoding/decoding method and device, and recording medium for storing bitstream |
CA3119935A1 (en) * | 2018-11-14 | 2020-05-22 | Sharp Kabushiki Kaisha | Systems and methods for applying deblocking filters to reconstructed video data |
CN111800643A (en) * | 2020-07-03 | 2020-10-20 | 北京博雅慧视智能技术研究院有限公司 | Deblocking filter for video coding and filtering method thereof |
US11917144B2 (en) * | 2021-09-29 | 2024-02-27 | Mediatek Inc. | Efficient in-loop filtering for video coding |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050013494A1 (en) | 2003-07-18 | 2005-01-20 | Microsoft Corporation | In-loop deblocking filter |
TW200629909A (en) * | 2004-12-01 | 2006-08-16 | Samsung Electronics Co Ltd | A pipelined deblocking filter |
US20070291857A1 (en) * | 2006-06-16 | 2007-12-20 | Via Technologies, Inc. | Systems and Methods of Video Compression Deblocking |
US20080043853A1 (en) * | 2006-08-17 | 2008-02-21 | Fujitsu Limited | Deblocking filter, image encoder, and image decoder |
US20080117980A1 (en) | 2006-11-16 | 2008-05-22 | Ching-Yu Hung | Deblocking Filters |
US20100027685A1 (en) * | 2008-07-30 | 2010-02-04 | Samsung Electronics Co., Ltd. | Method of processing boundary strength by deblocking filter and coding apparatus |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6907079B2 (en) | 2002-05-01 | 2005-06-14 | Thomson Licensing S.A. | Deblocking filter conditioned on pixel brightness |
JP4646574B2 (en) | 2004-08-30 | 2011-03-09 | 株式会社日立製作所 | Data processing system |
CN101421935B (en) * | 2004-09-20 | 2011-05-25 | Divx公司 | Video deblocking filter |
WO2008084996A1 (en) | 2007-01-11 | 2008-07-17 | Samsung Electronics Co, . Ltd. | Method and apparatus for deblocking-filtering video data |
US8204129B2 (en) * | 2007-03-27 | 2012-06-19 | Freescale Semiconductor, Inc. | Simplified deblock filtering for reduced memory access and computational complexity |
JP2009194617A (en) * | 2008-02-14 | 2009-08-27 | Sony Corp | Image processor, image processing method, program of image processing method and recording medium with program of image processing method recorded thereon |
CN101540900A (en) * | 2008-03-20 | 2009-09-23 | 矽统科技股份有限公司 | Method for reducing blocking artifacts in video streams |
KR101705138B1 (en) * | 2008-04-11 | 2017-02-09 | 톰슨 라이센싱 | Deblocking filtering for displaced intra prediction and template matching |
KR101590500B1 (en) * | 2008-10-23 | 2016-02-01 | 에스케이텔레콤 주식회사 | / Video encoding/decoding apparatus Deblocking filter and deblocing filtering method based intra prediction direction and Recording Medium therefor |
KR101457396B1 (en) * | 2010-01-14 | 2014-11-03 | 삼성전자주식회사 | Method and apparatus for video encoding using deblocking filtering, and method and apparatus for video decoding using the same |
JP2011223302A (en) * | 2010-04-09 | 2011-11-04 | Sony Corp | Image processing apparatus and image processing method |
TWI600318B (en) * | 2010-05-18 | 2017-09-21 | Sony Corp | Image processing apparatus and image processing method |
CN101883285A (en) * | 2010-07-06 | 2010-11-10 | 西安交通大学 | Design Method of Parallel Pipeline Deblocking Filter VLSI Structure |
US8861617B2 (en) * | 2010-10-05 | 2014-10-14 | Mediatek Inc | Method and apparatus of region-based adaptive loop filtering |
US9525884B2 (en) * | 2010-11-02 | 2016-12-20 | Hfi Innovation Inc. | Method and apparatus of slice boundary filtering for high efficiency video coding |
US20120106622A1 (en) * | 2010-11-03 | 2012-05-03 | Mediatek Inc. | Method and Apparatus of Slice Grouping for High Efficiency Video Coding |
WO2012096623A1 (en) * | 2011-01-14 | 2012-07-19 | Telefonaktiebolaget L M Ericsson (Publ) | Deblocking filtering |
US9338476B2 (en) * | 2011-05-12 | 2016-05-10 | Qualcomm Incorporated | Filtering blockiness artifacts for video coding |
CN103563374B (en) * | 2011-05-27 | 2017-02-08 | 索尼公司 | Image-processing device and method |
KR101956284B1 (en) * | 2011-06-30 | 2019-03-08 | 엘지전자 주식회사 | Interpolation Method And Prediction method thereof |
US9521418B2 (en) * | 2011-07-22 | 2016-12-13 | Qualcomm Incorporated | Slice header three-dimensional video extension for slice header prediction |
US9232237B2 (en) * | 2011-08-05 | 2016-01-05 | Texas Instruments Incorporated | Block-based parallel deblocking filter in video coding |
KR102039076B1 (en) * | 2011-09-09 | 2019-10-31 | 선 페이턴트 트러스트 | Low complex deblocking filter decisions |
EP2737704B1 (en) * | 2011-09-13 | 2020-02-19 | HFI Innovation Inc. | Method and apparatus for reduction of deblocking filter |
US9167269B2 (en) * | 2011-10-25 | 2015-10-20 | Qualcomm Incorporated | Determining boundary strength values for deblocking filtering for video coding |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050013494A1 (en) | 2003-07-18 | 2005-01-20 | Microsoft Corporation | In-loop deblocking filter |
TW200629909A (en) * | 2004-12-01 | 2006-08-16 | Samsung Electronics Co Ltd | A pipelined deblocking filter |
US20070291857A1 (en) * | 2006-06-16 | 2007-12-20 | Via Technologies, Inc. | Systems and Methods of Video Compression Deblocking |
US20080043853A1 (en) * | 2006-08-17 | 2008-02-21 | Fujitsu Limited | Deblocking filter, image encoder, and image decoder |
US20080117980A1 (en) | 2006-11-16 | 2008-05-22 | Ching-Yu Hung | Deblocking Filters |
US20100027685A1 (en) * | 2008-07-30 | 2010-02-04 | Samsung Electronics Co., Ltd. | Method of processing boundary strength by deblocking filter and coding apparatus |
Non-Patent Citations (1)
Title |
---|
See also references of EP2737704A4 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105745931A (en) * | 2013-11-24 | 2016-07-06 | Lg电子株式会社 | Method and apparatus for encoding and decoding video signal using adaptive sampling |
EP3072300A4 (en) * | 2013-11-24 | 2017-07-05 | LG Electronics Inc. | Method and apparatus for encoding and decoding video signal using adaptive sampling |
US10462464B2 (en) | 2013-11-24 | 2019-10-29 | Lg Electronics Inc. | Method and apparatus for encoding and decoding video signal using adaptive sampling |
EP3178228A4 (en) * | 2014-09-15 | 2018-02-21 | HFI Innovation Inc. | Method of deblocking for intra block copy in video coding |
US10743034B2 (en) | 2014-09-15 | 2020-08-11 | Hfi Innovation Inc. | Method of deblocking for intra block copy in video coding |
US11297352B2 (en) | 2014-09-15 | 2022-04-05 | Hfi Innovation Inc. | Method of deblocking for intra block copy in video coding |
WO2017045101A1 (en) * | 2015-09-14 | 2017-03-23 | Mediatek Singapore Pte. Ltd. | Advanced deblocking filter in video coding |
Also Published As
Publication number | Publication date |
---|---|
US20140211848A1 (en) | 2014-07-31 |
EP2737704A1 (en) | 2014-06-04 |
CN103947208B (en) | 2017-07-07 |
EP2737704B1 (en) | 2020-02-19 |
US20170094273A1 (en) | 2017-03-30 |
IL230286A (en) | 2017-10-31 |
CN103947208A (en) | 2014-07-23 |
US10003798B2 (en) | 2018-06-19 |
EP2737704A4 (en) | 2016-02-17 |
US9554128B2 (en) | 2017-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10003798B2 (en) | Method and apparatus for reduction of deblocking filter | |
EP2708027B1 (en) | Method and apparatus for reduction of in-loop filter buffer | |
AU2012327672B2 (en) | Method and apparatus for non-cross-tile loop filtering | |
CN106331709B (en) | Method and apparatus for processing video using in-loop processing | |
US9967563B2 (en) | Method and apparatus for loop filtering cross tile or slice boundaries | |
US8913656B2 (en) | Method and apparatus for in-loop filtering | |
US11303900B2 (en) | Method and apparatus for motion boundary processing | |
CN107071485B (en) | Video coding method and apparatus with sample adaptive offset processing | |
CN105898335B (en) | Loop filtering method and loop filtering device for improving hardware efficiency | |
EP3057320A1 (en) | Method and apparatus of loop filters for efficient hardware implementation | |
CN109845266B (en) | Smoothing filtering method and apparatus for removing ripple effects | |
WO2014023207A1 (en) | Method and apparatus for sample adaptive offset in a video decoder | |
CN103051892B (en) | Embedded loop filtering method and embedded loop filtering device | |
CN107040778A (en) | Loop filtering method and loop filtering device | |
EP2880861A1 (en) | Method and apparatus for video processing incorporating deblocking and sample adaptive offset | |
WO2018134363A1 (en) | Filter apparatus and methods | |
CN114342380B (en) | Deblocking filter selection in video or image coding |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12831238; Country of ref document: EP; Kind code of ref document: A1
 | WWE | Wipo information: entry into national phase | Ref document number: 2012831238; Country of ref document: EP
 | WWE | Wipo information: entry into national phase | Ref document number: 14342334; Country of ref document: US
 | NENP | Non-entry into the national phase | Ref country code: DE