
EP1164792A2 - Format converter using bidirectional motion vector and method thereof - Google Patents


Info

Publication number
EP1164792A2
EP1164792A2 (application EP01104859A)
Authority
EP
European Patent Office
Prior art keywords
motion vector
interpolated
frame
field
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01104859A
Other languages
German (de)
French (fr)
Other versions
EP1164792A3 (en)
Inventor
Sung-Hee Lee
Sung-Jea Ko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of EP1164792A2
Publication of EP1164792A3

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0135: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
    • H04N 7/014: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes involving the use of motion vectors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0117: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving conversion of the spatial resolution of the incoming video signal
    • H04N 7/012: Conversion between an interlaced and a progressive signal


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Systems (AREA)

Abstract

A format converter which performs frame rate conversion and de-interlacing using a bidirectional motion vector, and a method thereof, are provided. The method includes the steps of (a) estimating a bidirectional motion vector for a frame to be interpolated using motion vectors between the current frame and the previous frame; (b) setting the motion vector of a neighboring block that has the minimum error distortion, among the motion vectors estimated in step (a), as the motion vector of the current block; and (c) forming the frame to be interpolated with the motion vector set in step (b).

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image signal format converter and a method thereof, and more particularly, to a format converter which performs frame rate conversion and de-interlacing using a bidirectional motion vector, and a method thereof.
2. Description of the Related Art
Generally, in a PC or a high definition TV (HDTV), format conversion such as frame rate conversion and de-interlacing is needed to exchange programs having various signal standards.
FIG. 1 is a block diagram of a conventional frame rate conversion apparatus.
Referring to FIG. 1, an image dividing unit 110 divides an image into a changed region and an unchanged region for efficient motion estimation as shown in FIG. 2. The unchanged region is divided again into a covered region, an uncovered region, a background, and an object.
A motion estimating unit 120 generates the motion vector of a block using a block matching algorithm which is generally used in video coding. In a representative existing block matching algorithm, a motion vector is found for each block on the assumption that the pixels in a predetermined-size block, as shown in FIG. 3, move only translationally, without rotation, magnification, or reduction. In FIG. 3, it is assumed that the motion vector of an N x N base block located at arbitrary coordinates (xc, yc) in the current frame (fc) is estimated over a range of ± P pixels in the previous frame (fp). The search range in the previous frame is then (N+2P) x (N+2P), and the motion vector is determined as the location having the maximum correlation among a total of (2P+1)² candidate locations. At this time, the difference between the base block in the current frame and a candidate block in the previous frame is calculated as a mean absolute difference (MAD) as in the following equation 1:
MAD(m, n) = (1/N²) Σi=0..N-1 Σj=0..N-1 | fc(xc + i, yc + j) − fp(xc + m + i, yc + n + j) |, for −P ≤ m, n ≤ P   (equation 1)
Here, the motion vector of the block is determined as the location (m, n) in the search range for which the mean absolute difference between the current block and the candidate block is minimum.
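As an illustration only (not part of the patent text), the following Python sketch implements the full-search block matching just described: for one N x N base block at (xc, yc) it evaluates the MAD of every candidate displacement within ±P pixels and returns the displacement with the smallest MAD. The array names, the grayscale-frame assumption, and the boundary handling are assumptions of the sketch.

```python
import numpy as np

def block_matching_mad(f_c, f_p, xc, yc, N=16, P=8):
    """Full-search block matching: return the displacement (m, n) within +-P
    that minimizes the mean absolute difference (MAD) of equation 1 between
    the N x N base block of the current frame f_c and a candidate block of
    the previous frame f_p.  Frames are 2-D grayscale arrays."""
    base = f_c[yc:yc + N, xc:xc + N].astype(np.float64)
    best_mv, best_mad = (0, 0), np.inf
    for n in range(-P, P + 1):            # vertical candidate displacement
        for m in range(-P, P + 1):        # horizontal candidate displacement
            y0, x0 = yc + n, xc + m
            if y0 < 0 or x0 < 0 or y0 + N > f_p.shape[0] or x0 + N > f_p.shape[1]:
                continue                  # candidate block falls outside the previous frame
            cand = f_p[y0:y0 + N, x0:x0 + N].astype(np.float64)
            mad = float(np.mean(np.abs(base - cand)))
            if mad < best_mad:
                best_mad, best_mv = mad, (m, n)
    return best_mv, best_mad
```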
A spatiotemporal smoothing unit 130 corrects inappropriate motion vectors obtained in the motion estimating unit 120 and improves the smoothness of the motion vector field, as shown in FIG. 4.
A motion-compensated interpolation unit 140 finds a forward motion vector between the previous frame and the next frame of the image to be interpolated and, using the obtained motion vector, performs bidirectional interpolation according to the region classification information generated in the image dividing unit 110. At this time, as shown in FIG. 5, motion-compensated interpolation using a forward motion vector generates overlaps, where blocks overlap because two or more motion vectors are assigned to the same area of the frame to be interpolated, and holes, where no motion vector is assigned. These overlaps and holes directly degrade the picture quality of the image. Also, since they have irregular shapes, they must be handled on a pixel-by-pixel basis. Therefore, the conventional apparatus requires complicated signal processing and correspondingly complicated hardware to remove the overlaps and holes.
Also, for an ordinary television video signal, the frequency band is compressed using an interlacing method in which two fields form a frame. Recently, to display an interlaced image on a PC or an HDTV, which generally use progressive scanning, the image lines that are empty in each field must be generated by some de-interlacing method for progressive scanning.
FIG. 6 is a conceptual basic diagram of ordinary de-interlacing.
Referring to FIG. 6, de-interlacing changes a field containing only odd-numbered lines, or only even-numbered lines, into a frame. At this time, an output frame Fo(x, n) is defined as the following equation 2:
Fo(x, n) = F(x, n) for the lines present in field n (that is, y mod 2 = n mod 2, with x = (x, y)), and Fo(x, n) = Fi(x, n) for the missing lines.   (equation 2)
Here, x means a spatial location, and n is a field number. F( x , n) is an input field, and Fi( x , n) is a pixel to be interpolated.
FIG. 7 shows a 3 x 3 window for applying the edge-based line averaging (ELA) de-interlacing algorithm, which does not use motion compensation.
Referring to FIG. 7, ELA de-interlacing uses the correlation between pixels along each direction through the location of the pixel to be interpolated, as in the following equation 3; that is, the mean value of the neighboring pixels along the most correlated direction is output:
Fi(x, y, n) = ( F(x + k, y − 1, n) + F(x − k, y + 1, n) ) / 2, where the direction k ∈ {−1, 0, 1} minimizes | F(x + k, y − 1, n) − F(x − k, y + 1, n) |.   (equation 3)
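The ELA rule, in the common form assumed for equation 3 above, can be sketched for one missing pixel as follows; the function name, the array layout, and the border handling are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def ela_pixel(field, x, y):
    """Edge-based line averaging: interpolate the missing pixel at column x of
    line y from the line above (y - 1) and the line below (y + 1), along the
    direction with the smallest absolute difference (3 x 3 window of FIG. 7)."""
    up = field[y - 1].astype(np.float64)
    down = field[y + 1].astype(np.float64)
    width = field.shape[1]
    # start with the vertical direction, then test the two diagonals
    best_val, best_diff = 0.5 * (up[x] + down[x]), abs(up[x] - down[x])
    for k in (-1, 1):
        if 0 <= x + k < width and 0 <= x - k < width:
            diff = abs(up[x + k] - down[x - k])
            if diff < best_diff:
                best_diff = diff
                best_val = 0.5 * (up[x + k] + down[x - k])
    return best_val
```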
FIG. 8 is a conceptual diagram for explaining an ordinary time-recursive (TR) de-interlacing method.
Referring to FIG. 8, TR de-interlacing using a motion vector assumes that the previous field (n-1) has been perfectly de-interlaced, and the missing data of the current field (n) is compensated with a motion vector. A pixel to be interpolated can be the original pixel of the previous field or a previously interpolated pixel. Therefore, a pixel to be interpolated can be expressed as the following equation 4:
Fo(x, n) = F(x, n) for the lines present in field n, and Fo(x, n) = Fo(x − D(x, n), n − 1) for the missing lines, where D(x, n) is the motion vector estimated for the pixel to be interpolated.   (equation 4)
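A minimal sketch of the TR rule of equation 4 for a single pixel, under a couple of assumptions: fields follow the parity convention y mod 2 = n mod 2, the previous frame has already been fully de-interlaced, and out-of-range motion-compensated addresses are clipped. Names are illustrative only.

```python
import numpy as np

def tr_deinterlace_pixel(prev_frame, curr_field, x, y, n, mv):
    """Time-recursive de-interlacing (equation 4): keep the original pixel on
    the lines that field n carries; otherwise copy the motion-compensated
    pixel from the previously de-interlaced frame.  prev_frame is the full
    de-interlaced (n-1)-th frame; curr_field holds field n on a frame grid."""
    if y % 2 == n % 2:                    # line present in the current field
        return curr_field[y, x]
    h, v = mv                             # motion vector (horizontal, vertical)
    yy = int(np.clip(y - v, 0, prev_frame.shape[0] - 1))
    xx = int(np.clip(x - h, 0, prev_frame.shape[1] - 1))
    return prev_frame[yy, xx]
```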
However, since the ELA de-interlacing method does not use motion compensation, flickering occurs in regions where motion exists, and since the TR de-interlacing method de-interlaces recursively, an error occurring in an arbitrary field can propagate to other fields.
SUMMARY OF THE INVENTION
To solve the above problems, it is an object of the present invention to provide a frame rate converting method which improves picture quality by directly obtaining a bidirectional motion vector between two continuous frames for a pixel to be interpolated.
It is another object to provide a frame rate converter using the frame rate converting method.
It is another object to provide a de-interlacing method which, by estimating a bidirectional motion vector between two continuous fields for a pixel to be interpolated, is easy to perform and has an excellent outline-keeping ability.
It is another object to provide a de-interlacing apparatus using the de-interlacing method.
It is another object to provide a de-interlacing apparatus which can enhance the reliability of motion information and reduce errors in a pixel to be interpolated by adaptively selecting between a motion-compensated interpolation value and a spatiotemporal interpolation value according to the degree of motion of an input image.
To accomplish the above object of the present invention, there is provided a frame rate converting method having the steps of (a) estimating a bidirectional motion vector for a frame to be interpolated using motion vectors between the current frame and the previous frame; (b) setting the motion vector of a neighboring block that has the minimum error distortion, among the motion vectors estimated in the step (a) in a frame to be interpolated, as the motion vector of the current block; and (c) forming a frame to be interpolated with the motion vector set in the step (b).
To accomplish another object of the present invention, there is also provided a frame rate converter having a bidirectional motion estimating means for obtaining the motion vector between the current frame and the previous frame, assigning the motion vector to a frame to be interpolated, and estimating the assigned motion vector for a frame to be interpolated; a spatiotemporal smoothing unit for evaluating the accuracy of the motion vector of the current block in the frame to be interpolated in the bidirectional motion estimating means, and then setting the motion vector of a neighboring block, which has the minimum error distortion, as the motion vector of the current block; and an interpolation unit for extending the block to be interpolated, and interpolating with the motion vector obtained in the spatiotemporal smoothing unit in an overlapped region with different weights.
To accomplish another object of the present invention, there is also provided a de-interlacing method having (a) estimating a bidirectional motion vector for a pixel to be interpolated using motion vectors between the previous field and the next field; (b) setting the motion vector of which neighboring error distortion is the minimum in the step (a) as the motion vector of a pixel to be interpolated; and (c) forming the pixel to be interpolated with the motion vector set in the step (b).
To accomplish another object of the present invention, there is also provided a de-interlacing apparatus having a bidirectional motion estimating means for obtaining the motion vector between the current field and the previous field, assigning the motion vector to a field to be interpolated, and estimating the assigned motion vector for a field to be interpolated; a spatiotemporal smoothing unit for evaluating the accuracy of the motion vector of the current block in the field to be interpolated in the bidirectional motion estimating means, and then setting the motion vector of a neighboring block, which has the minimum error distortion, as the motion vector of the current block; and a signal converting unit for forming a pixel of a line having no data, with the median value of pixel values obtained by applying the motion vector set in the spatiotemporal smoothing unit, the mean value of the pixel values, and the values of pixels vertically neighboring the pixel to be interpolated.
BRIEF DESCRIPTION OF THE DRAWINGS
The above objects and advantages of the present invention will become more apparent by describing in detail a preferred embodiment thereof with reference to the attached drawings in which:
  • FIG. 1 is a block diagram of a conventional frame rate conversion apparatus;
  • FIG. 2 is a diagram for explaining an image dividing method in the image dividing unit of FIG. 1;
  • FIG. 3 is a diagram for explaining a motion estimating method in the motion estimating unit of FIG. 1;
  • FIG. 4 illustrates a screen before refinement and a screen after refinement in the spatiotemporal smoothing unit of FIG. 1;
  • FIG. 5 is an example of the structure of an image interpolated by motion compensation in the motion-compensated interpolation unit of FIG. 1;
  • FIG. 6 is a conceptual basic diagram of ordinary de-interlacing;
  • FIG. 7 is a 3x3 window for applying edge-based line averaging (ELA) de-interlacing algorithm which does not use motion compensation;
  • FIG. 8 is a conceptual diagram for explaining an ordinary time-recursive (TR) de-interlacing method;
  • FIG. 9 is a block diagram of a frame rate conversion apparatus according to the present invention;
  • FIGS. 10A through 10C are conceptual diagrams for obtaining a bidirectional motion vector;
  • FIG. 11 is a conceptual diagram for refining a motion vector of the spatiotemporal smoothing unit of FIG. 9;
  • FIG. 12 is a conceptual diagram for explaining a motion-compensated interpolation method of the refined motion-compensated interpolation unit of FIG. 9;
  • FIG. 13 is a block diagram of a de-interlacing apparatus according to the present invention;
  • FIG. 14 is a conceptual diagram for showing decimation conversion of the motion estimating unit of FIG. 13;
  • FIG. 15 is a conceptual diagram for showing motion-compensated de-interlacing in the signal conversion unit of FIG. 13;
  • FIG. 16 is a conceptual diagram for showing spatiotemporal interpolation using a median filter in the signal conversion unit of FIG. 13; and
  • FIG. 17 is another embodiment of a de-interlacing apparatus according to the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
    Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. The present invention is not restricted to the following embodiments, and many variations are possible within the spirit and scope of the present invention. The embodiments of the present invention are provided in order to more completely explain the present invention to anyone skilled in the art.
    FIG. 9 is a block diagram of a frame rate conversion apparatus according to the present invention.
    The apparatus of FIG. 9 includes a motion estimating unit 210, a spatiotemporal smoothing unit 220, and a refined motion-compensated interpolation unit 230.
    Referring to FIG. 9, the motion estimating unit 210 obtains a motion vector between the current frame and the previous frame, assigns the motion vector to the frame to be interpolated, and estimates a bidirectional motion vector for the frame to be interpolated.
    The spatiotemporal smoothing unit 220 evaluates the accuracy of the motion vector of the current block estimated in the frame to be interpolated, and then sets the motion vector of the neighboring block with the minimum error distortion as the motion vector of the current block.
    The refined motion-compensated interpolation unit 230 forms a block to be interpolated with the mean of blocks in the previous frame and the next frame of the frame to be interpolated, using the motion vector obtained in the spatiotemporal smoothing unit 220. At this time, the refined motion-compensated interpolation unit 230 extends the block to be interpolated and interpolates the overlapped regions with different weights.
    FIGS. 10A through 10C are conceptual diagrams for obtaining a bidirectional motion vector.
    First, in two neighboring frames, Fn-1 is the (n-1)-th frame, Fn+1 is the (n+1)-th frame, and Fn is the n-th frame. For the n-th frame (Fn), as shown in FIGS. 10A through 10C, a bidirectional motion vector is obtained through a motion vector initialization stage (FIGS. 10A and 10B) and a motion vector adjusting stage (FIG. 10C).
    Referring to FIG. 10A, the motion vector initialization stage will now be explained.
    First, the (n-1)-th frame/field (Fn-1) and the (n+1)-th frame/field (Fn+1) are decimated in a 2:1 ratio and reconstructed into the (n-1)-th frame/field (F̂n-1) and the (n+1)-th frame/field (F̂n+1).
    Then, as shown in FIG. 10A, the (n+1)-th frame/field (F̂n+1) is divided into a plurality of blocks, and a search range is determined for each block. In the search range, a forward motion vector (MV) is estimated by applying a block matching algorithm (BMA). Then, as shown in FIG. 10B, the n-th frame/field (F̂n) to be interpolated is divided into blocks, and the estimated forward MV is set as the initial MV of each block of the n-th frame/field (F̂n) to be interpolated. Therefore, by compensating motion along the block grid of the frame/field to be interpolated using a bidirectional motion vector as shown in FIG. 10B, the overlaps and holes that occur in the existing method do not occur.
    Next, referring to FIG. 10C, the motion vector adjusting stage will now be explained. First, since the forward MV is used in the initialization stage, the initial MV obtained in the motion vector initialization stage deviates slightly from the true bidirectional vector. To correct this, taking the forward MV obtained in the motion vector initialization stage as an initial value, a small search range (± d) is newly set. Then, by applying the BMA again in the small search range (± d), the motion vector set in the initial stage is corrected, and a bidirectional motion vector is generated. To explain the adjusting stage for the initial MV shown in FIG. 10C, consider an arbitrary block (Bti) in the n-th frame/field (F̂n) to be interpolated. The center of the block (Bti) is (x, y), and its initial MV is Di = (h, v). The initial MV (Di) simultaneously represents, for the arbitrary block (Bti), the motion from the n-th frame/field (F̂n) to be interpolated to the (n+1)-th frame/field (F̂n+1) and the motion from the (n-1)-th frame/field (F̂n-1) to the n-th frame/field (F̂n) to be interpolated. Then, if the arbitrary block (Bti) in the n-th frame/field (F̂n) to be interpolated moves by the initial MV (Di), the arbitrary block (Bti) is generated from the block (Bt1) of the (n-1)-th frame/field (F̂n-1) and the block (Bt2) of the (n+1)-th frame/field (F̂n+1). That is, the centers of the initial blocks (Bt1) and (Bt2) can be expressed as the following equation 5: Bt1(xt1, yt1) = (x, y) − (h, v) = (x − h, y − v), Bt2(xt2, yt2) = (x, y) + (h, v) = (x + h, y + v).
    Here, the arbitrary block (Bti) is located at a fixed location, and each of the blocks (Bt1) and (Bt2) moves from its initial location within the search range (± d). At this time, since the n-th frame/field (F̂n) is located at the temporal center between the (n-1)-th frame/field (F̂n-1) and the (n+1)-th frame/field (F̂n+1), the motion between the block (Bt1) and the arbitrary block (Bti) should be the same as the motion between the arbitrary block (Bti) and the block (Bt2). For this, on the motion trajectory of the initial MV, the block (Bt1) and the block (Bt2) should move symmetrically about the center of the block (Bti) to be interpolated.
    Therefore, the number of possible combinations for a search range of (± d) is (2d + 1)². After this process, the bidirectional vector between the (n-1)-th frame/field (F̂n-1) and the (n+1)-th frame/field (F̂n+1) for the n-th frame/field (F̂n) is obtained. At this time, since the n-th frame/field (F̂n) is located at the center between the (n-1)-th frame/field (F̂n-1) and the (n+1)-th frame/field (F̂n+1), the motion vector in each direction has the same value.
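A sketch of the adjusting stage described above, under some assumptions: the forward vector for the block has already been assigned in the initialization stage (for instance with a full-search BMA such as the earlier sketch), (cx, cy) is taken as the block origin rather than its exact centre, and candidate blocks reaching outside the picture are simply skipped. Names and parameters are illustrative.

```python
import numpy as np

def refine_bidirectional_mv(f_prev, f_next, cx, cy, init_mv, N=8, d=2):
    """Motion vector adjusting stage (FIG. 10C): starting from the forward MV
    of the block to be interpolated, test symmetric displacements within +-d
    so that block Bt1 in the (n-1)-th frame and block Bt2 in the (n+1)-th
    frame move mirror-wise about the block (equation 5), and keep the
    displacement with the smallest matching error."""
    h0, v0 = init_mv
    H, W = f_prev.shape
    best_mv, best_err = (h0, v0), np.inf
    for dv in range(-d, d + 1):
        for dh in range(-d, d + 1):
            h, v = h0 + dh, v0 + dv
            x1, y1 = cx - h, cy - v       # origin of Bt1 in the (n-1)-th frame
            x2, y2 = cx + h, cy + v       # origin of Bt2 in the (n+1)-th frame
            if not (0 <= y1 and y1 + N <= H and 0 <= x1 and x1 + N <= W and
                    0 <= y2 and y2 + N <= H and 0 <= x2 and x2 + N <= W):
                continue
            b1 = f_prev[y1:y1 + N, x1:x1 + N].astype(np.float64)
            b2 = f_next[y2:y2 + N, x2:x2 + N].astype(np.float64)
            err = float(np.mean(np.abs(b1 - b2)))   # matching error between Bt1 and Bt2
            if err < best_err:
                best_err, best_mv = err, (h, v)
    return best_mv, best_err
```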
    FIG. 11 is a conceptual diagram for refining a motion vector of the spatiotemporal smoothing unit 220 of FIG. 9.
    Referring to FIG. 11, first, in the frame/field to be interpolated, the current block is denoted MV0, the candidate MV blocks surrounding the current block are denoted MVi (i = 1, ..., 8), and the motion vector of a block is denoted D(·). Then, the motion vector of the block having the minimum MAD among the motion vectors of the candidate blocks is set as the motion vector of the current block. That is, as in the following equation 6, using the bidirectional motion vector between the two neighboring frames/fields, the displaced frame difference (DFD) of the current block is obtained, and the motion vector of the candidate block having the minimum DFD is set as the motion vector of the current block. In conclusion, the spatiotemporal smoothing improves picture quality by removing inappropriate motion vectors detected in the motion estimation.
    DFD(D(MVi)) = Σp∈B0 | F̂n-1(p − D(MVi)) − F̂n+1(p + D(MVi)) |, and D(MV0) ← arg min over i = 0, ..., 8 of DFD(D(MVi)), where B0 is the current block.   (equation 6)
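A sketch of this smoothing step, assuming the candidate list holds the current block's own vector followed by its eight neighbours' vectors and that blocks displaced outside the picture are skipped; names are illustrative.

```python
import numpy as np

def smooth_block_mv(f_prev, f_next, cx, cy, candidate_mvs, N=8):
    """Spatiotemporal smoothing (FIG. 11): among the current block's vector and
    the vectors of its eight neighbouring blocks, keep the one whose
    bidirectional displaced frame difference (DFD) over the current block is
    smallest (equation 6).  candidate_mvs is a list of (h, v) pairs whose
    first entry is the current block's own vector."""
    H, W = f_prev.shape
    best_mv, best_dfd = candidate_mvs[0], np.inf
    for h, v in candidate_mvs:
        y1, x1 = cy - v, cx - h           # block displaced into the (n-1)-th frame
        y2, x2 = cy + v, cx + h           # block displaced into the (n+1)-th frame
        if not (0 <= y1 and y1 + N <= H and 0 <= x1 and x1 + N <= W and
                0 <= y2 and y2 + N <= H and 0 <= x2 and x2 + N <= W):
            continue
        dfd = float(np.sum(np.abs(f_prev[y1:y1 + N, x1:x1 + N].astype(np.float64)
                                  - f_next[y2:y2 + N, x2:x2 + N].astype(np.float64))))
        if dfd < best_dfd:
            best_dfd, best_mv = dfd, (h, v)
    return best_mv
```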
    FIG. 12 is a conceptual diagram for explaining a motion-compensated interpolation method of the refined motion-compensated interpolation unit 230 of FIG. 9.
    Referring to FIG. 12, the refined motion-compensated interpolation unit 230 forms the frame to be interpolated by taking the block mean of the two neighboring frames, as in the following equation 7, using the bidirectionally obtained motion vector. At this time, each block of the frame to be interpolated is extended horizontally and vertically beyond the original block size, and the overlapped regions are interpolated with different weights: fti(p) = (1/2) [ ft1(p − D(B(p))) + ft2(p + D(B(p))) ]   (equation 7)
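The following sketch illustrates this overlapped-block interpolation. The triangular weight window, the per-pixel loops, and the border clipping are assumptions of the sketch, since the patent does not specify the exact weights; only the averaging of the two motion-compensated blocks and the weighted blending of the extended, overlapped regions follow the description above.

```python
import numpy as np

def interpolate_frame(f_prev, f_next, mv_field, N=8, ext=2):
    """Refined motion-compensated interpolation (equation 7): every block of
    the frame to be interpolated is the average of its two motion-compensated
    blocks in the (n-1)-th and (n+1)-th frames; blocks are extended by `ext`
    pixels on each side and the overlapped regions are blended with a weight
    window, then normalised.  mv_field[by][bx] holds the (h, v) vector of the
    block at block coordinates (by, bx)."""
    H, W = f_prev.shape
    acc = np.zeros((H, W))
    wgt = np.zeros((H, W))
    side = N + 2 * ext
    w1d = 1.0 - 0.5 * np.abs(np.linspace(-1.0, 1.0, side))   # triangular profile (assumed)
    window = np.outer(w1d, w1d)
    for by in range(0, H, N):
        for bx in range(0, W, N):
            h, v = mv_field[by // N][bx // N]
            for dy in range(side):
                for dx in range(side):
                    y, x = by - ext + dy, bx - ext + dx
                    if not (0 <= y < H and 0 <= x < W):
                        continue
                    y1 = int(np.clip(y - v, 0, H - 1))
                    x1 = int(np.clip(x - h, 0, W - 1))
                    y2 = int(np.clip(y + v, 0, H - 1))
                    x2 = int(np.clip(x + h, 0, W - 1))
                    val = 0.5 * (float(f_prev[y1, x1]) + float(f_next[y2, x2]))
                    acc[y, x] += window[dy, dx] * val
                    wgt[y, x] += window[dy, dx]
    return acc / np.maximum(wgt, 1e-9)
```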
    FIG. 13 is a block diagram of a de-interlacing apparatus according to the present invention.
    Referring to FIG. 13, Fn-1, which is input first, is the (n-1)-th field, Fn is the n-th field, and Fn+1 is the (n+1)-th field. F̂n is the video signal converted from the n-th field (Fn) for progressive scanning.
    A motion estimating unit 410 obtains the motion vector of the n-th field (Fn), which corresponds to the location of the field to be interpolated, after obtaining the bidirectional motion vector from the (n-1)-th field (Fn-1) and the (n+1)-th field (Fn+1). The bidirectional motion vector for the n-th field (Fn) is obtained by processing the fields, on which decimation conversion has been performed, through the motion vector initialization stage (FIGS. 10A and 10B) and the motion vector adjusting stage (FIG. 10C). As a result, for the field to be interpolated, the bidirectional motion vector between the previous field and the next field is calculated.
    A spatiotemporal smoothing unit 420, as described with reference to FIG. 11, obtains bidirectional motion vectors smoothed through spatiotemporal smoothing, because the bidirectional motion vectors obtained in the motion estimating unit 410 have some discontinuity.
    A signal converting unit 430 is an interlaced-to-progressive conversion block; it restores the no-data lines of the n-th field (Fn) with the mean of the pixels to which the bidirectional motion vectors generated in the spatiotemporal smoothing unit 420 are applied, and outputs the final frame (F̂n).
    FIG. 14 is a conceptual diagram for showing decimation conversion of the motion estimating unit 410 of FIG. 13.
    Referring to FIG. 14, the (n-1)-th field (Fn-1) and the (n+1)-th field (Fn+1) are input and reconstructed into the (n-1)-th field (F̂n-1) and the (n+1)-th field (F̂n+1), using only the lines having data. That is, the reconstructed (n-1)-th field (F̂n-1) and (n+1)-th field (F̂n+1) are reduced by half vertically relative to the input (n-1)-th field (Fn-1) and (n+1)-th field (Fn+1). Therefore, the reconstructed (n-1)-th field (F̂n-1) and (n+1)-th field (F̂n+1) are decimated in a 2:1 ratio vertically and horizontally.
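A trivial sketch of this reconstruction, assuming the interlaced field data is stored on a full frame grid and keeping every second line; whether the data lines are the even or the odd ones depends on field parity and is an assumption here.

```python
def reconstruct_field(interlaced, top_field=True):
    """Decimation conversion (FIG. 14): keep only the lines of the frame grid
    that actually carry field data, so the reconstructed field has half the
    vertical size of the frame grid."""
    start = 0 if top_field else 1     # parity of the data lines (assumed)
    return interlaced[start::2, :]
```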
    FIG. 15 is a conceptual diagram for showing motion-compensated de-interlacing in the signal conversion unit 430 of FIG. 13.
    Referring to FIG. 15, the no-data lines of the n-th field (Fn) are restored using the bidirectional motion vector of the field (F̂n) to be interpolated. The restoration process can be expressed as the following equation 8:
    F̂n(x, y) = (1/2) [ Fn-1(x − h, y − v) + Fn+1(x + h, y + v) ]   (equation 8)
    Here, x and y are the abscissa value and the ordinate value respectively in each field, and h and v are the horizontal component and the vertical component, respectively, of the bidirectional motion vector.
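As a concrete illustration (not the patent's implementation), equation 8 for a single missing pixel might look like this; integer motion components and border clipping are assumptions of the sketch.

```python
import numpy as np

def mc_deinterlace_pixel(f_prev, f_next, x, y, mv):
    """Motion-compensated de-interlacing (equation 8): a missing pixel of the
    n-th field is restored as the average of the bidirectionally displaced
    pixels of the (n-1)-th and (n+1)-th fields."""
    h, v = mv
    H, W = f_prev.shape
    y1 = int(np.clip(y - v, 0, H - 1))
    x1 = int(np.clip(x - h, 0, W - 1))
    y2 = int(np.clip(y + v, 0, H - 1))
    x2 = int(np.clip(x + h, 0, W - 1))
    return 0.5 * (float(f_prev[y1, x1]) + float(f_next[y2, x2]))
```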
    FIG. 16 is a conceptual diagram for showing spatiotemporal interpolation using a median filter in the signal conversion unit of FIG. 13.
    The performance of a de-interlacing method is greatly affected by the result of the motion estimation. Therefore, to reduce the effect of motion estimation errors, the no-data lines in the field (F̂n) to be interpolated are interpolated using a median filter as shown in FIG. 16, which can be expressed as the following equation 9:
    F̂n(p) = median( A, B, C, D, (C + D)/2 )   (equation 9)
    Here, pixels A, B, C, and D are defined as follows: A = Fn(p − uy), B = Fn(p + uy), C = Fn-1(p − D), D = Fn+1(p + D).
    Here, D is the bidirectional motion vector, uy is (0, 1)T, and (C + D)/2 is the result of the motion-compensated de-interlacing of equation 8.
    In this way, when the median filter is used, the frame (F̂n) to be finally output takes the original pixel if the line has data; otherwise, the pixel (Z) of the n-th field is interpolated as the median value of the pixel (C) of the (n-1)-th field, the pixel (D) of the (n+1)-th field, the pixels (A, B) vertically neighboring the pixel (Z) to be interpolated in the n-th field, and the motion-compensated pixel ((C+D)/2).
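A sketch of the median rule of equation 9 for one missing pixel; the function and array names and the border handling are illustrative assumptions.

```python
import numpy as np

def median_deinterlace_pixel(f_prev, f_curr, f_next, x, y, mv):
    """Median-filtered spatiotemporal interpolation (FIG. 16, equation 9): the
    missing pixel Z is the median of its vertical neighbours A and B in the
    current field, the motion-compensated pixels C and D of the neighbouring
    fields, and the motion-compensated average (C + D) / 2."""
    h, v = mv
    H, W = f_curr.shape
    A = float(f_curr[max(y - 1, 0), x])                 # pixel above Z
    B = float(f_curr[min(y + 1, H - 1), x])             # pixel below Z
    C = float(f_prev[int(np.clip(y - v, 0, H - 1)), int(np.clip(x - h, 0, W - 1))])
    D = float(f_next[int(np.clip(y + v, 0, H - 1)), int(np.clip(x + h, 0, W - 1))])
    return float(np.median([A, B, C, D, 0.5 * (C + D)]))
```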
    FIG. 17 is another embodiment of a de-interlacing apparatus according to the present invention.
    Referring to FIG. 17, a motion-compensated interpolation unit 172 either interpolates with the mean of the pixels to which the motion vector is applied, as in the apparatus of FIG. 13 according to the present invention, or outputs the median value of the pixel values to which the motion vector is applied, the mean value of those pixels, and the values of the two pixels vertically neighboring the pixel to be interpolated.
    A spatiotemporal interpolation unit 176 outputs, as the interpolation value of the frame, the mean value of the pixels neighboring the pixel to be interpolated and of the pixels at the locations to be interpolated in the previous field and the next field of the field to be interpolated.
    A motion evaluation unit 174 evaluates the degree of motion using the MAD value of the current block calculated in the motion estimating unit 410 of FIG. 13.
    A motion adaptation unit 178 sets the value of a pixel to be finally interpolated by adaptively calculating the output value of the motion-compensated interpolation unit 172 and the output value of the spatiotemporal interpolation unit 176 according to the degree of motion evaluated in the motion evaluation unit 174.
    Therefore, the de-interlacing apparatus of FIG. 17 prevents the errors that occur when an inaccurate motion vector is used in the process of determining the presence of motion.
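The patent does not give a formula for the adaptive combination, so the sketch below is only one plausible reading: the block MAD reported by motion estimation is mapped to a blending weight, with a small MAD favouring the motion-compensated value and a large MAD favouring the spatiotemporal value. The thresholds, the blend direction, and the linear blend itself are assumptions.

```python
import numpy as np

def motion_adaptive_pixel(mc_value, st_value, mad, low=2.0, high=10.0):
    """Motion adaptation (FIG. 17): combine the motion-compensated
    interpolation value and the spatiotemporal interpolation value according
    to the block MAD from motion estimation (an assumed linear blend)."""
    alpha = float(np.clip((mad - low) / (high - low), 0.0, 1.0))
    # alpha = 0: reliable motion vector -> motion-compensated value
    # alpha = 1: unreliable/strong motion -> spatiotemporal value
    return (1.0 - alpha) * mc_value + alpha * st_value
```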
    As described above, according to the present invention, by obtaining the bidirectional motion vector between two frames for a frame to be interpolated, overlaps and holes do not occur. Therefore, picture quality can be improved and, in particular, panning or zooming images involving camera motion can be processed efficiently. Also, noise along the time axis between fields and flicker between lines, which occur in the existing methods, can be reduced, and the ability to preserve outlines is better than in the existing de-interlacing methods. Also, by adaptively selecting between a motion-compensated interpolation value and a spatiotemporal interpolation value according to the degree of motion of the input image, the reliability of the motion information is enhanced compared to the method simply using a motion-compensated interpolation value, and artifacts can be efficiently reduced.

    Claims (20)

    1. A frame rate converting method comprising the steps of:
      (a) estimating a bidirectional motion vector for a frame to be interpolated using motion vectors between the current frame and the previous frame;
      (b) setting the motion vector of a neighboring block that has the minimum error distortion, among the motion vectors estimated in the step (a) in a frame to be interpolated, as the motion vector of the current block; and
      (c) forming a frame to be interpolated with the motion vector set in the step (b).
    2. The method of claim 1, wherein the step (a) further comprises the sub-steps of:
      (a-1) detecting a motion vector between the current frame and the previous frame, and assigning the motion vector to a frame to be interpolated; and
      (a-2) adjusting the motion vector assigned in the step (a-1) in the frame to be interpolated according to a block grid.
    3. The method of claim 2, wherein the detecting in the step (a-1) further comprises the steps of:
      decimating an image; and
      estimating a motion vector from the decimated image.
    4. The method of claim 2, wherein the step (a-2) is the step for estimating a value, which has the minimum error among blocks of the previous frame and the current frame which linearly pass through the center of a block formed according to the block grid, as the bidirectional motion vector of the frame block to be interpolated in the frame to be interpolated.
    5. The method of claim 1, wherein the step (b) includes the step for evaluating the accuracy of the motion vector of the current block in the frame to be interpolated and setting the motion vector of neighboring block which has the minimum error distortion, as the motion vector of the current block.
    6. The method of claim 1, wherein the step (b) includes the steps of:
      assigning a motion vector for the frame to be interpolated;
      evaluating the accuracy of the motion vector of the current block; and
      setting the motion vector of a neighboring block which has the minimum error distortion, as the motion vector of the current block.
    7. The method of claim 1, wherein in the step (c) a block to be interpolated is formed with the mean of blocks, using the estimated motion vector in a frame to be interpolated.
    8. The method of claim 1, wherein in the step (c) the block to be interpolated is extended and interpolated in an overlapped region with different weights.
    9. A frame rate converter comprising:
      a bidirectional motion estimating means for obtaining the motion vector between the current frame and the previous frame, assigning the motion vector to a frame to be interpolated, and estimating the assigned motion vector for a frame to be interpolated;
      a spatiotemporal smoothing unit for evaluating the accuracy of the motion vector of the current block in the frame to be interpolated in the bidirectional motion estimating means, and then setting the motion vector of a neighboring block, which has the minimum error distortion, as the motion vector of the current block; and
      an interpolation unit for extending the block to be interpolated, and interpolating with the motion vector obtained in the spatiotemporal smoothing unit in an overlapped region with different weights.
    10. A de-interlacing method comprising:
      (a) estimating a bidirectional motion vector for a pixel to be interpolated using motion vectors between the previous field and the next field;
      (b) setting the motion vector of which neighboring error distortion is the minimum in the step (a) as the motion vector of a pixel to be interpolated; and
      (c) forming the pixel to be interpolated with the motion vector set in the step (b).
    11. The de-interlacing method of claim 10, wherein the step (a) further comprises the sub-steps of:
      (a-1) detecting the motion vector between the current field and the previous field, and assigning the motion vector to a field to be interpolated; and
      (a-2) adjusting the motion vector assigned in the step (a-1) according to a block grid in a frame to be interpolated.
    12. The de-interlacing method of claim 10, wherein the step (a-2) is the step for estimating a location value, which has the minimum error among blocks of the previous field and the current field which linearly pass through the center of block formed according to the block grid in the field to be interpolated, as the bidirectional motion vector of the block of the field to be interpolated.
    13. The de-interlacing method of claim 10, wherein the step (b) includes the step for evaluating the accuracy of the motion vector of the current block in the field to be interpolated and setting the motion vector of neighboring block which has the minimum error distortion, as the motion vector of the current block.
    14. The de-interlacing method of claim 10, wherein the step (b) includes the steps of:
      adjusting a motion vector for the field to be interpolated;
      evaluating the accuracy of the motion vector of the current block; and
      setting the motion vector of a neighboring block which has the minimum error distortion, as the motion vector of the current block.
    15. The de-interlacing method of claim 10, wherein in the step (c) a pixel to be interpolated is formed with the mean of pixels, using the estimated motion vector in a field to be interpolated.
    16. The de-interlacing method of claim 10, wherein in the step (c) the median value of pixel values, to which the estimated motion vector is applied, of the previous field and the next field of the field to be interpolated, the mean value of the pixels, and the values of two pixels vertically neighboring a pixel to be interpolated, is set to a pixel to be interpolated.
    17. The de-interlacing method of claim 10, wherein in the step (c) the field to be interpolated takes the original pixel if the line has data, and otherwise, takes the median value of the pixel value on the same location of the (n-1)-th field, the pixel value on the same location of the (n+1)-th field, the values of pixels vertically neighboring the pixel to be interpolated in the n-th field, and the mean value of these pixel values.
    18. A de-interlacing apparatus comprising:
      a bidirectional motion estimating means for obtaining the motion vector between the current field and the previous field, assigning the motion vector to a field to be interpolated, and estimating the assigned motion vector for a field to be interpolated;
      a spatiotemporal smoothing unit for evaluating the accuracy of the motion vector of the current block in the field to be interpolated in the bidirectional motion estimating means, and then setting the motion vector of a neighboring block, which has the minimum error distortion, as the motion vector of the current block; and
      a signal converting unit for forming a pixel of a line having no data, with the median value of pixel values obtained by applying the motion vector set in the spatiotemporal smoothing unit, the mean value of the pixel values, and the values of pixels vertically neighboring the pixel to be interpolated.
    19. An adaptive de-interlacing apparatus comprising:
      a motion evaluating unit for evaluating the degree of motion referring to the value of a motion vector of which error distortion between blocks of the previous field and the current field is the minimum;
      a motion-compensated interpolation unit for interpolating with the mean of pixels to which bidirectional motion vector detected for a pixel to be interpolated is applied, or interpolating with the median value of pixel values, to which a motion vector is applied, the mean value of the pixels, and the value between two pixels vertically neighboring the pixel to be interpolated;
      a spatiotemporal interpolation unit for interpolating with the mean value of pixels neighboring the pixel to be interpolated and pixels to be interpolated in the previous field and the next field of the field to be interpolated; and
      a motion adaptation unit for adaptively selecting between the interpolation value of the motion-compensated interpolation unit and the interpolation value of the spatiotemporal interpolation unit according to the degree of motion evaluated in the motion evaluating unit.
    20. An adaptive frame rate converter comprising:
      a motion evaluating unit for evaluating the degree of motion referring to the value of a motion vector of which error distortion between blocks of the previous frame and the current frame is the minimum;
      a motion-compensated interpolation unit for interpolating with the mean of pixels to which bidirectional motion vector detected for a frame to be interpolated is applied;
      a spatiotemporal interpolation unit for interpolating with the mean value of pixels neighboring the pixel to be interpolated and pixels to be interpolated in the previous frame and the next frame of the frame to be interpolated; and
      a motion adaptation unit for adaptively selecting between the interpolation value of the motion-compensated interpolation unit and the interpolation value of the spatiotemporal interpolation unit according to the degree of motion evaluated in the motion evaluating unit.
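
    As a concrete illustration of the motion-compensated median interpolation recited in claims 16-18, the following Python sketch forms one missing pixel from five candidates: the motion-compensated pixels of the previous and next fields, their mean, and the two vertical neighbours in the field being interpolated. It is only a sketch under stated assumptions: the function and variable names are invented here, the bidirectional vector is split as half a vector towards each neighbouring field, and border handling is reduced to simple clamping; none of these details are taken verbatim from the patent.

import numpy as np

def mc_median_pixel(prev_field, next_field, cur_field, y, x, mv):
    # Form one missing pixel of the field being interpolated (claims 16-18 style sketch).
    # prev_field / next_field : fields before and after the field to be interpolated
    # cur_field               : the field to be interpolated (data only on alternate lines)
    # (y, x)                  : coordinates of the missing pixel
    # mv                      : (dy, dx) bidirectional motion vector assigned to this pixel's block
    dy, dx = mv
    h, w = prev_field.shape

    def clamp(v, lo, hi):
        return max(lo, min(hi, int(v)))

    # Motion-compensated samples: half the vector back into the previous field,
    # half the vector forward into the next field (the half/half split is an assumption).
    p = float(prev_field[clamp(y - dy // 2, 0, h - 1), clamp(x - dx // 2, 0, w - 1)])
    n = float(next_field[clamp(y + dy // 2, 0, h - 1), clamp(x + dx // 2, 0, w - 1)])
    mean_pn = (p + n) / 2.0

    # The two pixels vertically neighbouring the missing line in the field being interpolated.
    above = float(cur_field[clamp(y - 1, 0, h - 1), x])
    below = float(cur_field[clamp(y + 1, 0, h - 1), x])

    # Median of the two motion-compensated values, their mean and the vertical neighbours.
    return float(np.median([p, n, mean_pn, above, below]))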
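
    The spatiotemporal smoothing unit of claim 18 re-evaluates the vector of each block and adopts the candidate, taken from the block itself or a neighbouring block, that gives the smallest error distortion. Below is a minimal sketch, assuming an 8x8 block grid, a 3x3 neighbourhood of candidate vectors and a SAD matching error computed bidirectionally between the previous and next fields; the grid size, neighbourhood and error measure are assumptions of this sketch, not figures from the patent.

import numpy as np

def smooth_block_mv(prev_field, next_field, block_mvs, by, bx, block=8):
    # Spatio-temporal smoothing of a bidirectional motion-vector field (claim 18 style sketch).
    # block_mvs has shape (rows, cols, 2) and holds one (dy, dx) vector per block.
    # The block at grid position (by, bx) keeps the candidate vector - its own or a
    # neighbour's - whose bidirectional matching error is smallest.
    h, w = prev_field.shape
    y0, x0 = by * block, bx * block

    def bidirectional_sad(mv):
        dy, dx = int(mv[0]), int(mv[1])
        # Patch half a vector back in the previous field ...
        py = int(np.clip(y0 - dy // 2, 0, h - block))
        px = int(np.clip(x0 - dx // 2, 0, w - block))
        # ... and half a vector forward in the next field.
        ny = int(np.clip(y0 + dy // 2, 0, h - block))
        nx = int(np.clip(x0 + dx // 2, 0, w - block))
        a = prev_field[py:py + block, px:px + block].astype(np.int64)
        b = next_field[ny:ny + block, nx:nx + block].astype(np.int64)
        return int(np.abs(a - b).sum())

    rows, cols = block_mvs.shape[:2]
    # Candidate vectors: the current block and its 3x3 spatial neighbourhood.
    candidates = [tuple(block_mvs[j, i])
                  for j in range(max(0, by - 1), min(rows, by + 2))
                  for i in range(max(0, bx - 1), min(cols, bx + 2))]
    return min(candidates, key=bidirectional_sad)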
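
    Claims 19 and 20 describe motion-adaptive selection between a motion-compensated interpolation value and a spatiotemporal interpolation value. The sketch below applies the idea to frame rate conversion as in claim 20: a block whose minimum-distortion bidirectional vector is small is filled with a plain temporal average of the surrounding frames, otherwise with a motion-compensated average along the vector. The threshold, the block grid and the simplification of the spatiotemporal term to a co-located temporal mean are assumptions made for this sketch only.

import numpy as np

def adaptive_frame_interpolate(prev_frame, next_frame, mv_field, block=8, motion_thr=2.0):
    # Motion-adaptive frame interpolation in the spirit of claims 19-20 (illustrative only).
    # mv_field has shape (h // block, w // block, 2); frame dimensions are assumed to be
    # multiples of `block`.  `motion_thr` is an assumed value, not one from the patent.
    h, w = prev_frame.shape
    out = np.empty((h, w), dtype=np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = (int(v) for v in mv_field[by // block, bx // block])
            if np.hypot(dy, dx) < motion_thr:
                # Low motion: average the co-located pixels of the two surrounding frames.
                out[by:by + block, bx:bx + block] = (
                    prev_frame[by:by + block, bx:bx + block].astype(np.float64)
                    + next_frame[by:by + block, bx:bx + block]) / 2.0
            else:
                # Higher motion: average the pixels reached by half the vector in each frame.
                py = int(np.clip(by - dy // 2, 0, h - block))
                px = int(np.clip(bx - dx // 2, 0, w - block))
                ny = int(np.clip(by + dy // 2, 0, h - block))
                nx = int(np.clip(bx + dx // 2, 0, w - block))
                out[by:by + block, bx:bx + block] = (
                    prev_frame[py:py + block, px:px + block].astype(np.float64)
                    + next_frame[ny:ny + block, nx:nx + block]) / 2.0
    return out
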
    EP01104859A 2000-06-13 2001-02-28 Format converter using bidirectional motion vector and method thereof Withdrawn EP1164792A3 (en)

    Applications Claiming Priority (2)

    Application Number Priority Date Filing Date Title
    KR2000032388 2000-06-13
    KR1020000032388A KR100708091B1 (en) 2000-06-13 2000-06-13 Apparatus and method for frame rate conversion using bidirectional motion vector

    Publications (2)

    Publication Number Publication Date
    EP1164792A2 true EP1164792A2 (en) 2001-12-19
    EP1164792A3 EP1164792A3 (en) 2003-08-13

    Family

    ID=19671823

    Family Applications (1)

    Application Number Title Priority Date Filing Date
    EP01104859A Withdrawn EP1164792A3 (en) 2000-06-13 2001-02-28 Format converter using bidirectional motion vector and method thereof

    Country Status (5)

    Country Link
    US (1) US6900846B2 (en)
    EP (1) EP1164792A3 (en)
    JP (1) JP4563603B2 (en)
    KR (1) KR100708091B1 (en)
    CN (1) CN1183768C (en)

    Families Citing this family (127)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US7663695B2 (en) * 2000-05-05 2010-02-16 Stmicroelectronics S.R.L. Method and system for de-interlacing digital images, and computer program product therefor
    US7266150B2 (en) * 2001-07-11 2007-09-04 Dolby Laboratories, Inc. Interpolation of video compression frames
    US8111754B1 (en) * 2001-07-11 2012-02-07 Dolby Laboratories Licensing Corporation Interpolation of video compression frames
    KR100396558B1 (en) * 2001-10-25 2003-09-02 삼성전자주식회사 Apparatus and method for converting frame and/or field rate using adaptive motion compensation
    KR100412501B1 (en) * 2001-11-30 2003-12-31 삼성전자주식회사 Pixel-data selection device for motion compensation and method of the same
    US20030107648A1 (en) * 2001-12-12 2003-06-12 Richard Stewart Surveillance system and method with adaptive frame rate
    US6885706B2 (en) * 2002-01-05 2005-04-26 Aiptek International Inc. Method for promoting temporal resolution of sequential images
    JP4114859B2 (en) * 2002-01-09 2008-07-09 松下電器産業株式会社 Motion vector encoding method and motion vector decoding method
    US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
    JP3840129B2 (en) * 2002-03-15 2006-11-01 株式会社東芝 Motion vector detection method and apparatus, interpolation image generation method and apparatus, and image display system
    JP2003339029A (en) * 2002-05-20 2003-11-28 Sony Corp Motion vector correction circuit and method therefor
    US20040001546A1 (en) 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
    AU2012244113B2 (en) * 2002-06-28 2013-06-27 Dolby Laboratories Licensing Corporation Interpolation of video compression frames
    US7154952B2 (en) * 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
    US7362374B2 (en) * 2002-08-30 2008-04-22 Altera Corporation Video interlacing using object motion estimation
    JP4198550B2 (en) * 2002-09-10 2008-12-17 株式会社東芝 Frame interpolation method and apparatus using the frame interpolation method
    KR100506864B1 (en) 2002-10-04 2005-08-05 엘지전자 주식회사 Method of determining motion vector
    JP4462823B2 (en) * 2002-11-20 2010-05-12 ソニー株式会社 Image signal processing apparatus and processing method, coefficient data generating apparatus and generating method used therefor, and program for executing each method
    KR100486284B1 (en) * 2002-11-22 2005-04-29 삼성전자주식회사 Apparatus and method of deinterlacing interlaced frames capable of outputting consecutive two deinterlaced frames
    KR20040061244A (en) * 2002-12-30 2004-07-07 삼성전자주식회사 Method and apparatus for de-interlacing viedo signal
    KR100499144B1 (en) * 2003-03-06 2005-07-04 삼성전자주식회사 Method and apparatus for generating the motion vector of interpolated image using the adaptive bi-directional motion estimation
    JP4053490B2 (en) 2003-03-25 2008-02-27 株式会社東芝 Interpolated image creation method for frame interpolation, image display system using the same, and interpolated image creation apparatus
    JP3887346B2 (en) * 2003-04-28 2007-02-28 株式会社東芝 Video signal processing apparatus, video signal processing method, and video display apparatus
    US8824553B2 (en) 2003-05-12 2014-09-02 Google Inc. Video compression method
    KR100530223B1 (en) * 2003-05-13 2005-11-22 삼성전자주식회사 Frame interpolation method and apparatus at frame rate conversion
    KR100517504B1 (en) * 2003-07-01 2005-09-28 삼성전자주식회사 Method and apparatus for determining motion compensation mode of B-picture
    US7129987B1 (en) 2003-07-02 2006-10-31 Raymond John Westwater Method for converting the resolution and frame rate of video data using Discrete Cosine Transforms
    US7170469B2 (en) * 2003-07-18 2007-01-30 Realtek Semiconductor Corp. Method and apparatus for image frame synchronization
    US7609763B2 (en) 2003-07-18 2009-10-27 Microsoft Corporation Advanced bi-directional predictive coding of video frames
    DE60312981D1 (en) * 2003-08-26 2007-05-16 St Microelectronics Srl Method and system for canceling the interlacing process during the presentation of video images
    US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
    CN1316824C (en) * 2004-02-02 2007-05-16 扬智科技股份有限公司 Method of motion vector de-interleaving
    KR100565066B1 (en) * 2004-02-11 2006-03-30 삼성전자주식회사 Motion Compensation Interpolation Method Using Overlapping Block-based Motion Estimation and Frame Rate Conversion Apparatus
    GB2411784B (en) * 2004-03-02 2006-05-10 Imagination Tech Ltd Motion compensation deinterlacer protection
    US20110025911A1 (en) * 2004-05-17 2011-02-03 Weisgerber Robert C Method of enhancing motion pictures for exhibition at a higher frame rate than that in which they were originally produced
    KR101127220B1 (en) * 2004-07-28 2012-04-12 세종대학교산학협력단 Apparatus for motion compensation-adaptive de-interlacing and method thereof
    TWI280798B (en) * 2004-09-22 2007-05-01 Via Tech Inc Apparatus and method of adaptive de-interlace of image
    JP4359223B2 (en) 2004-10-29 2009-11-04 株式会社 日立ディスプレイズ Video interpolation device, frame rate conversion device using the same, and video display device
    JP4396496B2 (en) * 2004-12-02 2010-01-13 株式会社日立製作所 Frame rate conversion device, video display device, and frame rate conversion method
    JP4736456B2 (en) * 2005-02-15 2011-07-27 株式会社日立製作所 Scanning line interpolation device, video display device, video signal processing device
    RU2402885C2 (en) 2005-03-10 2010-10-27 Квэлкомм Инкорпорейтед Classification of content for processing multimedia data
    US7567294B2 (en) * 2005-03-28 2009-07-28 Intel Corporation Gradient adaptive video de-interlacing
    JPWO2006117878A1 (en) * 2005-04-28 2008-12-18 株式会社日立製作所 Frame rate conversion device and video display device
    US8018998B2 (en) * 2005-05-20 2011-09-13 Microsoft Corporation Low complexity motion compensated frame interpolation method
    JP4887727B2 (en) * 2005-10-20 2012-02-29 ソニー株式会社 Image signal processing apparatus, camera system, and image signal processing method
    US20070171280A1 (en) * 2005-10-24 2007-07-26 Qualcomm Incorporated Inverse telecine algorithm based on state machine
    US7546026B2 (en) * 2005-10-25 2009-06-09 Zoran Corporation Camera exposure optimization techniques that take camera and scene motion into account
    KR100870115B1 (en) * 2005-12-21 2008-12-10 주식회사 메디슨 Image Formation Method Using Block Matching and Motion Compensation Interpolation
    US8204104B2 (en) * 2006-03-09 2012-06-19 Sony Corporation Frame rate conversion system, method of converting frame rate, transmitter, and receiver
    EP1855474A1 (en) * 2006-05-12 2007-11-14 Sony Deutschland Gmbh Method for generating an interpolated image between two images of an input image sequence
    US20080018788A1 (en) * 2006-07-20 2008-01-24 Samsung Electronics Co., Ltd. Methods and systems of deinterlacing using super resolution technology
    KR100848509B1 (en) * 2006-10-10 2008-07-25 삼성전자주식회사 Display device and control method
    US7697836B2 (en) * 2006-10-25 2010-04-13 Zoran Corporation Control of artificial lighting of a scene to reduce effects of motion in the scene on an image being acquired
    US8259226B2 (en) * 2006-11-24 2012-09-04 Sharp Kabushiki Kaisha Image display device
    JP4438795B2 (en) 2006-12-28 2010-03-24 株式会社日立製作所 Video conversion device, video display device, and video conversion method
    US8942505B2 (en) * 2007-01-09 2015-01-27 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filter representation
    JP4513819B2 (en) 2007-03-19 2010-07-28 株式会社日立製作所 Video conversion device, video display device, and video conversion method
    JP2008244846A (en) * 2007-03-27 2008-10-09 Toshiba Corp Device and method for interpolating frame
    US8115863B2 (en) * 2007-04-04 2012-02-14 Freescale Semiconductor, Inc. Video de-interlacer using pixel trajectory
    US20080260021A1 (en) * 2007-04-23 2008-10-23 Chih-Ta Star Sung Method of digital video decompression, deinterlacing and frame rate conversion
    TWI342714B (en) * 2007-05-16 2011-05-21 Himax Tech Ltd Apparatus and method for frame rate up conversion
    US8433159B1 (en) * 2007-05-16 2013-04-30 Varian Medical Systems International Ag Compressed target movement model using interpolation
    GB2450121A (en) * 2007-06-13 2008-12-17 Sharp Kk Frame rate conversion using either interpolation or frame repetition
    US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
    US20090207314A1 (en) * 2008-02-14 2009-08-20 Brian Heng Method and system for motion vector estimation using a pivotal pixel search
    US8482620B2 (en) 2008-03-11 2013-07-09 Csr Technology Inc. Image enhancement based on multiple frames and motion estimation
    US8805101B2 (en) * 2008-06-30 2014-08-12 Intel Corporation Converting the frame rate of video streams
    US9445121B2 (en) 2008-08-04 2016-09-13 Dolby Laboratories Licensing Corporation Overlapped block disparity estimation and compensation architecture
    JP4385077B1 (en) 2008-08-27 2009-12-16 三菱電機株式会社 Motion vector detection device and image processing device
    JP5200788B2 (en) * 2008-09-09 2013-06-05 富士通株式会社 Video signal processing apparatus, video signal processing method, and video signal processing program
    US8325796B2 (en) 2008-09-11 2012-12-04 Google Inc. System and method for video coding using adaptive segmentation
    US8385404B2 (en) 2008-09-11 2013-02-26 Google Inc. System and method for video encoding using constructed reference frame
    US8326075B2 (en) 2008-09-11 2012-12-04 Google Inc. System and method for video encoding using adaptive loop filter
    WO2010038857A1 (en) * 2008-10-02 2010-04-08 ソニー株式会社 Image processing apparatus and method
    KR100991624B1 (en) 2008-11-03 2010-11-04 엘지전자 주식회사 How to determine motion vector
    KR100991618B1 (en) 2008-11-14 2010-11-04 엘지전자 주식회사 Method of determining motion vector
    KR100991568B1 (en) 2008-11-14 2010-11-04 엘지전자 주식회사 How to determine motion vector
    US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
    US8218075B2 (en) * 2009-02-06 2012-07-10 Analog Devices, Inc. Method and system for efficient de-interlacing
    DE102009001520B4 (en) * 2009-03-12 2016-02-25 Entropic Communications, Inc. Apparatus and method for interframe interpolation
    CN101534445B (en) * 2009-04-15 2011-06-22 杭州华三通信技术有限公司 A video processing method and system and deinterleaving processor
    KR20110055196A (en) * 2009-11-19 2011-05-25 삼성전자주식회사 Display Device and Image Signal Processing Method
    JP5372721B2 (en) * 2009-12-11 2013-12-18 ルネサスエレクトロニクス株式会社 Video signal processing apparatus, method, and program
    WO2011090790A1 (en) 2010-01-22 2011-07-28 Thomson Licensing Methods and apparatus for sampling-based super resolution video encoding and decoding
    JP5805665B2 (en) * 2010-01-22 2015-11-04 トムソン ライセンシングThomson Licensing Data pruning for video compression using Example-based super-resolution
    JP5424926B2 (en) * 2010-02-15 2014-02-26 パナソニック株式会社 Video processing apparatus and video processing method
    US20130170564A1 (en) 2010-09-10 2013-07-04 Thomson Licensing Encoding of a picture in a video sequence by example-based data pruning using intra-frame patch similarity
    WO2012033972A1 (en) 2010-09-10 2012-03-15 Thomson Licensing Methods and apparatus for pruning decision optimization in example-based data pruning compression
    EP2606648A1 (en) 2010-10-05 2013-06-26 General instrument Corporation Coding and decoding utilizing adaptive context model selection with zigzag scan
    US8514328B2 (en) * 2010-12-27 2013-08-20 Stmicroelectronics, Inc. Motion vector based image segmentation
    US8938001B1 (en) 2011-04-05 2015-01-20 Google Inc. Apparatus and method for coding using combinations
    US8638854B1 (en) 2011-04-07 2014-01-28 Google Inc. Apparatus and method for creating an alternate reference frame for video compression using maximal differences
    US8780996B2 (en) 2011-04-07 2014-07-15 Google, Inc. System and method for encoding and decoding video data
    US8780971B1 (en) 2011-04-07 2014-07-15 Google, Inc. System and method of encoding using selectable loop filters
    US8781004B1 (en) 2011-04-07 2014-07-15 Google Inc. System and method for encoding video using variable loop filter
    US9154799B2 (en) 2011-04-07 2015-10-06 Google Inc. Encoding and decoding motion via image segmentation
    US8891616B1 (en) 2011-07-27 2014-11-18 Google Inc. Method and apparatus for entropy encoding based on encoding cost
    US8885706B2 (en) 2011-09-16 2014-11-11 Google Inc. Apparatus and methodology for a video codec system with noise reduction capability
    US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
    JP6222514B2 (en) * 2012-01-11 2017-11-01 パナソニックIpマネジメント株式会社 Image processing apparatus, imaging apparatus, and computer program
    US9262670B2 (en) 2012-02-10 2016-02-16 Google Inc. Adaptive region of interest
    US9131073B1 (en) 2012-03-02 2015-09-08 Google Inc. Motion estimation aided noise reduction
    US11039138B1 (en) 2012-03-08 2021-06-15 Google Llc Adaptive coding of prediction modes using probability distributions
    WO2013162980A2 (en) 2012-04-23 2013-10-31 Google Inc. Managing multi-reference picture buffers for video data coding
    US9609341B1 (en) 2012-04-23 2017-03-28 Google Inc. Video data encoding and decoding using reference picture lists
    US9014266B1 (en) 2012-06-05 2015-04-21 Google Inc. Decimated sliding windows for multi-reference prediction in video coding
    US8819525B1 (en) 2012-06-14 2014-08-26 Google Inc. Error concealment guided robustness
    US9774856B1 (en) 2012-07-02 2017-09-26 Google Inc. Adaptive stochastic entropy coding
    US9344729B1 (en) 2012-07-11 2016-05-17 Google Inc. Selective prediction signal filtering
    US9277169B2 (en) * 2013-02-21 2016-03-01 Robert C. Weisgerber Method for enhancing motion pictures for exhibition at a higher frame rate than that in which they were originally produced
    US9509998B1 (en) 2013-04-04 2016-11-29 Google Inc. Conditional predictive multi-symbol run-length coding
    GB2514557A (en) * 2013-05-28 2014-12-03 Snell Ltd Image processing
    US9756331B1 (en) 2013-06-17 2017-09-05 Google Inc. Advance coded reference prediction
    US9392288B2 (en) 2013-10-17 2016-07-12 Google Inc. Video coding using scatter-based scan tables
    US9179151B2 (en) 2013-10-18 2015-11-03 Google Inc. Spatial proximity context entropy coding
    AU2015248303C1 (en) * 2014-04-17 2019-03-28 Nippon Steel Corporation Austenitic stainless steel and method for producing the same
    CN104219533B (en) * 2014-09-24 2018-01-12 苏州科达科技股份有限公司 A kind of bi-directional motion estimation method and up-conversion method of video frame rate and system
    US10102613B2 (en) 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
    JP6603455B2 (en) * 2014-12-24 2019-11-06 シチズン時計株式会社 Electronics
    US10430664B2 (en) * 2015-03-16 2019-10-01 Rohan Sanil System for automatically editing video
    CN104811723B (en) * 2015-04-24 2018-03-16 宏祐图像科技(上海)有限公司 Local Motion Vector Correction Method in MEMC Technology
    US10523961B2 (en) 2017-08-03 2019-12-31 Samsung Electronics Co., Ltd. Motion estimation method and apparatus for plurality of frames
    KR101959888B1 (en) * 2017-12-27 2019-03-19 인천대학교 산학협력단 Motion Vector Shifting apparatus and method for Motion-Compensated Frame Rate Up-Conversion
    US10469869B1 (en) * 2018-06-01 2019-11-05 Tencent America LLC Method and apparatus for video coding
    TWI719522B (en) * 2018-06-30 2021-02-21 大陸商北京字節跳動網絡技術有限公司 Symmetric bi-prediction mode for video coding
    US11516490B2 (en) 2018-07-16 2022-11-29 Lg Electronics Inc. Method and device for inter predicting on basis of DMVR
    CN109068083B (en) * 2018-09-10 2021-06-01 河海大学 A Square-Based Adaptive Motion Vector Field Smoothing Method

    Family Cites Families (24)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    JPS61200789A (en) * 1985-03-04 1986-09-05 Kokusai Denshin Denwa Co Ltd <Kdd> System for detecting dynamic vector of object on picture plane
    US5198901A (en) * 1991-09-23 1993-03-30 Matsushita Electric Corporation Of America Derivation and use of motion vectors in a differential pulse code modulation system
    FR2700090B1 (en) * 1992-12-30 1995-01-27 Thomson Csf Method for deinterlacing frames of a sequence of moving images.
    GB2283385B (en) * 1993-10-26 1998-04-01 Sony Uk Ltd Motion compensated video signal processing
    US5453799A (en) * 1993-11-05 1995-09-26 Comsat Corporation Unified motion estimation architecture
    KR0126871B1 (en) * 1994-07-30 1997-12-29 심상철 HIGH SPEED BMA FOR Bi-DIRECTIONAL MOVING VECTOR ESTIMATION
    KR100287211B1 (en) * 1994-08-30 2001-04-16 윤종용 Bidirectional motion estimation method and system
    JP2671820B2 (en) * 1994-09-28 1997-11-05 日本電気株式会社 Bidirectional prediction method and bidirectional prediction device
    US5886745A (en) * 1994-12-09 1999-03-23 Matsushita Electric Industrial Co., Ltd. Progressive scanning conversion apparatus
    JP3191583B2 (en) 1994-12-12 2001-07-23 ソニー株式会社 Information decryption device
    CN1108061C 1995-03-20 2003-05-07 大宇电子株式会社 Apparatus for encoding video signal using search grid
    US5661525A (en) * 1995-03-27 1997-08-26 Lucent Technologies Inc. Method and apparatus for converting an interlaced video frame sequence into a progressively-scanned sequence
    JP2798120B2 (en) * 1995-08-04 1998-09-17 日本電気株式会社 Motion compensated interframe prediction method and motion compensated interframe prediction device
    EP1274252A3 (en) * 1995-08-29 2005-10-05 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
    JP3855286B2 (en) * 1995-10-26 2006-12-06 ソニー株式会社 Image encoding device, image encoding method, image decoding device, image decoding method, and recording medium
    US5778097A (en) * 1996-03-07 1998-07-07 Intel Corporation Table-driven bi-directional motion estimation using scratch area and offset valves
    US5661524A (en) * 1996-03-08 1997-08-26 International Business Machines Corporation Method and apparatus for motion estimation using trajectory in a digital video encoder
    US5784115A (en) * 1996-12-31 1998-07-21 Xerox Corporation System and method for motion compensated de-interlacing of video frames
    US6404813B1 (en) * 1997-03-27 2002-06-11 At&T Corp. Bidirectionally predicted pictures or video object planes for efficient and flexible video coding
    KR100252080B1 (en) * 1997-10-10 2000-04-15 윤종용 Image Stabilization Device and Image Stabilization Method Using Motion Correction of Input Image Using Bit Plane Matching
    KR100255648B1 (en) * 1997-10-10 2000-05-01 윤종용 Video motion detection apparatus and method by gradient pattern matching
    US6192079B1 (en) * 1998-05-07 2001-02-20 Intel Corporation Method and apparatus for increasing video frame rate
    WO1999067952A1 (en) * 1998-06-25 1999-12-29 Hitachi, Ltd. Method and device for converting number of frames of image signals
    US6594313B1 (en) * 1998-12-23 2003-07-15 Intel Corporation Increased video playback framerate in low bit-rate video applications

    Cited By (38)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US6990148B2 (en) 2002-02-25 2006-01-24 Samsung Electronics Co., Ltd. Apparatus for and method of transforming scanning format
    EP1339234A3 (en) * 2002-02-25 2005-02-09 Samsung Electronics Co., Ltd. Apparatus for and method of converting scanning format
    EP1418754A3 (en) * 2002-10-08 2007-11-28 Broadcom Corporation Progressive conversion of interlaced video based on coded bitstream analysis
    EP1429547A3 (en) * 2002-12-10 2007-10-24 Samsung Electronics Co., Ltd. Deinterlacing apparatus and method
    AU2003264647B2 (en) * 2002-12-26 2005-03-03 Samsung Electronics Co., Ltd Apparatus and method for converting frame rate
    WO2005025213A1 (en) * 2003-09-04 2005-03-17 Koninklijke Philips Electronics N.V. Robust de-interlacing of video signals
    CN100450155C (en) * 2003-09-04 2009-01-07 皇家飞利浦电子股份有限公司 Robust de-interlacing of video signals
    WO2006012382A1 (en) * 2004-07-20 2006-02-02 Qualcomm Incorporated Method and apparatus for frame rate up conversion with multiple reference frames and variable block sizes
    EP1659791A1 2004-11-17 2006-05-24 Samsung Electronics Co., Ltd. Deinterlacing using motion estimation and compensation
    US7535513B2 (en) 2004-11-17 2009-05-19 Samsung Electronics Co., Ltd. Deinterlacing method and device in use of field variable partition type
    US8780957B2 (en) 2005-01-14 2014-07-15 Qualcomm Incorporated Optimal weights for MMSE space-time equalizer of multicode CDMA system
    US8879635B2 (en) 2005-09-27 2014-11-04 Qualcomm Incorporated Methods and device for data alignment with time domain boundary
    US8879856B2 (en) 2005-09-27 2014-11-04 Qualcomm Incorporated Content driven transcoder that orchestrates multimedia transcoding using content information
    US9113147B2 (en) 2005-09-27 2015-08-18 Qualcomm Incorporated Scalability techniques based on content information
    US9088776B2 (en) 2005-09-27 2015-07-21 Qualcomm Incorporated Scalability techniques based on content information
    US9071822B2 (en) 2005-09-27 2015-06-30 Qualcomm Incorporated Methods and device for data alignment with time domain boundary
    US8879857B2 (en) 2005-09-27 2014-11-04 Qualcomm Incorporated Redundant data encoding methods and device
    US8948260B2 (en) 2005-10-17 2015-02-03 Qualcomm Incorporated Adaptive GOP structure in video streaming
    WO2007047693A2 (en) 2005-10-17 2007-04-26 Qualcomm Incorporated Method and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video
    WO2007047693A3 (en) * 2005-10-17 2007-07-05 Qualcomm Inc Method and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video
    US8654848B2 (en) 2005-10-17 2014-02-18 Qualcomm Incorporated Method and apparatus for shot detection in video streaming
    WO2007075885A3 (en) * 2005-12-21 2007-10-25 Analog Device Inc Methods and apparatus for progressive scanning of interlaced video
    US8842730B2 (en) 2006-01-27 2014-09-23 Imax Corporation Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
    WO2007114995A1 (en) 2006-04-03 2007-10-11 Qualcomm Incorporated Preprocessor method and apparatus
    US9131164B2 (en) 2006-04-04 2015-09-08 Qualcomm Incorporated Preprocessor method and apparatus
    US8411931B2 (en) 2006-06-23 2013-04-02 Imax Corporation Methods and systems for converting 2D motion pictures for stereoscopic 3D exhibition
    US8531601B2 (en) 2007-08-08 2013-09-10 Canon Kabushiki Kaisha Image processing apparatus and control method
    US8421917B2 (en) 2007-08-08 2013-04-16 Canon Kabushiki Kaisha Image processing apparatus and method of controlling the same
    US9485457B2 (en) 2007-08-08 2016-11-01 Canon Kabushiki Kaisha Image processing apparatus and method of controlling the same
    US8842220B2 (en) 2007-08-08 2014-09-23 Canon Kabushiki Kaisha Image processing apparatus and method of controlling the same
    EP2026560A3 (en) * 2007-08-08 2009-11-11 Canon Kabushiki Kaisha Image processing apparatus and control method
    US8724022B2 (en) 2009-11-09 2014-05-13 Intel Corporation Frame rate conversion using motion estimation and compensation
    GB2475369A (en) * 2009-11-09 2011-05-18 Intel Corp Frame rate convertor using motion estimation and pixel interpolation
    GB2475369B (en) * 2009-11-09 2012-05-23 Intel Corp Frame rate conversion using motion estimation and compensation
    GB2476143A (en) * 2009-12-08 2011-06-15 Intel Corp Frame rate conversion using bi-directional, local and global motion estimation
    GB2476143B (en) * 2009-12-08 2012-12-26 Intel Corp Bi-directional, local and global motion estimation based frame rate conversion
    CN108305216A (en) * 2018-03-15 2018-07-20 信阳师范学院 A kind of image magnification method of bilateral four interpolation
    CN108305216B (en) * 2018-03-15 2021-07-30 嘉兴学院 An Image Enlargement Method Based on Bilateral Quadratic Interpolation

    Also Published As

    Publication number Publication date
    US20020036705A1 (en) 2002-03-28
    JP2002027414A (en) 2002-01-25
    CN1183768C (en) 2005-01-05
    EP1164792A3 (en) 2003-08-13
    US6900846B2 (en) 2005-05-31
    CN1328405A (en) 2001-12-26
    KR20010111740A (en) 2001-12-20
    JP4563603B2 (en) 2010-10-13
    KR100708091B1 (en) 2007-04-16

    Similar Documents

    Publication Publication Date Title
    EP1164792A2 (en) Format converter using bidirectional motion vector and method thereof
    US6473460B1 (en) Method and apparatus for calculating motion vectors
    US5784115A (en) System and method for motion compensated de-interlacing of video frames
    US7042512B2 (en) Apparatus and method for adaptive motion compensated de-interlacing of video data
    EP1223748B1 (en) Motion detection in an interlaced video signal
    US6118488A (en) Method and apparatus for adaptive edge-based scan line interpolation using 1-D pixel array motion detection
    US7667773B2 (en) Apparatus and method of motion-compensation adaptive deinterlacing
    US6331874B1 (en) Motion compensated de-interlacing
    JP4145351B2 (en) Video signal scanning conversion method, apparatus, and video signal display apparatus
    US6545719B1 (en) Apparatus and method for concealing interpolation artifacts in a video interlaced to progressive scan converter
    US7720150B2 (en) Pixel data selection device for motion compensated interpolation and method thereof
    JP3845456B2 (en) Motion compensated video signal processing method
    JP4153480B2 (en) Noise attenuator and progressive scan converter
    KR100484182B1 (en) Apparatus and method for deinterlacing
    US20020001347A1 (en) Apparatus and method for converting to progressive scanning format
    US20050249288A1 (en) Adaptive-weighted motion estimation method and frame rate converting apparatus employing the method
    US7868948B2 (en) Image signal processing apparatus, image signal processing method and program for converting an interlaced signal into a progressive signal
    KR100644601B1 (en) De-interlacing device using motion compensation interpolation and method
    JP4179089B2 (en) Motion estimation method for motion image interpolation and motion estimation device for motion image interpolation
    KR100827214B1 (en) Motion Compensation Upconversion for Video Scan Rate Conversion
    Chang et al. Four field local motion compensated de-interlacing
    KR100382650B1 (en) Method and apparatus for detecting motion using scaled motion information in video signal processing system and data interpolating method and apparatus therefor
    EP0943209B1 (en) Motion estimation and motion-compensated interpolation
    KR100382651B1 (en) Method and apparatus for detecting motion using region-wise motion decision information in video signal processing system and data interpolating method and apparatus therefor
    Zlokolica et al. Wavelet-based joint video de-interlacing and denoising

    Legal Events

    Date Code Title Description
    PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

    Free format text: ORIGINAL CODE: 0009012

    17P Request for examination filed

    Effective date: 20010228

    AK Designated contracting states

    Kind code of ref document: A2

    Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

    AX Request for extension of the european patent

    Free format text: AL;LT;LV;MK;RO;SI

    PUAL Search report despatched

    Free format text: ORIGINAL CODE: 0009013

    AK Designated contracting states

    Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

    AX Request for extension of the european patent

    Extension state: AL LT LV MK RO SI

    AKX Designation fees paid

    Designated state(s): DE GB NL

    17Q First examination report despatched

    Effective date: 20100318

    RAP1 Party data changed (applicant data changed or rights of an application transferred)

    Owner name: SAMSUNG ELECTRONICS CO., LTD.

    STAA Information on the status of an ep patent application or granted ep patent

    Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

    18D Application deemed to be withdrawn

    Effective date: 20170901