
CN102215368A - Motion self-adaptive de-interlacing method based on visual characteristics - Google Patents

Motion self-adaptive de-interlacing method based on visual characteristics

Info

Publication number
CN102215368A
CN102215368A
Authority
CN
China
Prior art keywords
interpolation
max
field
pixel
intra
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110146619XA
Other languages
Chinese (zh)
Inventor
庞志勇
陈弟虎
谭洪舟
雷东玮
江嘉文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201110146619XA priority Critical patent/CN102215368A/en
Publication of CN102215368A publication Critical patent/CN102215368A/en
Pending legal-status Critical Current

Landscapes

  • Television Systems (AREA)

Abstract

The invention provides a motion-adaptive de-interlacing method based on visual characteristics. 1) Five fields of interlaced video are stored continuously, and the left and right neighbours of each pixel are included in the motion detection through an FIR filter, yielding a motion detection value MV. 2) From MV, an interpolation coefficient α is obtained with a dynamic dual-threshold method. 3) Intra-field interpolation is computed with an edge-difference algorithm, and inter-field interpolation with an improved three-point median filter. 4) Using α together with the intra-field and inter-field results, the missing pixel is interpolated adaptively and finally interleaved with the existing lines of the current field to produce progressive output. The added low-pass FIR filter avoids losing motion inside moving objects and effectively reduces noise; a luminance-based dual-threshold dynamic adjustment derived from the characteristics of human vision improves the subjective de-interlacing quality; and different inter-field interpolation methods for different pixels improve the interpolation accuracy of slowly moving pixels.

Description

Motion-adaptive de-interlacing method based on visual characteristics

Technical Field

The invention belongs to the technical field of video image processing, and in particular relates to a motion-adaptive de-interlacing method based on visual characteristics that uses intra-field and inter-field interpolation.

Background Art

Because early technology was limited, interlaced video was used extensively, especially in television systems: each video frame is divided into two fields, and each field contains alternate scan lines. The three main analog colour television systems, PAL, NTSC and SECAM, all use interlaced scanning, whereas digital television uses progressive scanning.

Flat-panel display technology has now become mainstream. Because interlaced scanning has drawbacks such as flicker, picture judder and jagged edges, flat-panel displays, for example liquid crystal displays, are essentially all progressive-scan devices, so interlaced video must be de-interlaced before it can be viewed on them.

De-interlacing is the process of reconstructing the lines missing between the scan lines of each interlaced field. De-interlacing algorithms fall into three categories: linear, non-linear and motion-compensated. Linear algorithms comprise spatial filtering, temporal filtering and spatio-temporal filtering. The spatial-filtering algorithm, also called intra-field interpolation, exploits the correlation between adjacent sampled lines within the same field; it eliminates motion artifacts, but de-interlaces poorly when objects in the image move at high frequency in the vertical direction. The temporal-filtering algorithm, also called inter-field interpolation, exploits the correlation between adjacent fields in the time domain for linear interpolation; it can de-interlace still images without distortion, but produces severe jagged edges at the boundaries of moving objects. The spatio-temporal filtering algorithm combines spatial and temporal filtering by considering spatial and temporal neighbours at the same time, and to a certain degree combines the advantages of the two previous approaches: for still images it suppresses alias distortion and flicker simultaneously, but as the temporal frequency increases, spatial detail is gradually blurred. Motion-compensated algorithms use the motion information in the video to interpolate along the motion trajectory of the image; they perform best, but require the most computation, are the most expensive to implement in hardware, and are extremely sensitive to errors. Non-linear algorithms first perform motion detection on each pixel and then treat pixels of different motion types differently according to the detection result. The motion-adaptive de-interlacing algorithm is one such non-linear algorithm: it keeps the temporal algorithm's advantage of preserving vertical resolution while using a spatial algorithm to avoid the temporal algorithm's weakness on moving images.

At present, the de-interlacing algorithms above largely ignore the characteristics of human vision. In fact, the human eye is less sensitive to motion in areas of high or low brightness than in areas of moderate brightness; therefore, the thresholds for pixels of high or low brightness should be smaller than those for areas of moderate brightness.

Summary of the Invention

To address the above deficiencies, the present invention provides a motion-adaptive de-interlacing method based on visual characteristics that uses intra-field and inter-field interpolation, comprising:

1) continuously storing five fields of interlaced video f_{n-2}, f_{n-1}, f_{n}, f_{n+1} and f_{n+2}, and performing motion detection on these five fields to obtain a motion detection value MV;

2) obtaining an interpolation coefficient α with a dynamic dual-threshold method according to the characteristics of human vision;

3) performing intra-field and inter-field interpolation on field f_{n};

4) performing adaptive interpolation on field f_{n}, and finally interleaving the result with the existing lines of the current field to produce progressive output.

In step 1), the left and right neighbours of each pixel are included in the motion detection, forming an FIR filter:

A = (A1 + 2*A2 + A3)/4,
B = (B1 + 2*B2 + B3)/4,
C = (C1 + 2*C2 + C3)/4,
D = (D1 + 2*D2 + D3)/4,
E = (E1 + 2*E2 + E3)/4,
F = (F1 + 2*F2 + F3)/4,
G = (G1 + 2*G2 + G3)/4,
H = (H1 + 2*H2 + H3)/4,

MV1 = |A - D|,
MV2 = |B - E|,
MV3 = |C - F|,
MV4 = |D - G|,
MV5 = |E - H|,

MV = max(MV1, MV2, MV3, MV4, MV5),

where A1/A2/A3 and B1/B2/B3 are three pixels on each of two interlaced lines of field n-2, C1/C2/C3 are three pixels on one line of field n-1, D1/D2/D3 and E1/E2/E3 are three pixels on each of two interlaced lines of field n, F1/F2/F3 are three pixels on one line of field n+1, G1/G2/G3 and H1/H2/H3 are three pixels on each of two interlaced lines of field n+2, MV1, MV2, MV3, MV4 and MV5 are the differences between corresponding pixels of each compared pair of fields, and MV is the motion detection value.

In step 2), the interpolation coefficient α is obtained with a dynamic dual-threshold method:

α = 0, if MV ≤ MV_L; α = 1, if MV ≥ MV_H; α = (MV - MV_L)/(MV_H - MV_L), otherwise,

where MV_L is the lower threshold and MV_H is the upper threshold,

MV_L = 30 - Y*10/128, if Y < 128; MV_L = 30 - (255 - Y)*10/128, otherwise,

MV_H = 80 - Y*10/128, if Y < 128; MV_H = 80 - (255 - Y)*10/128, otherwise,

and Y is the luminance of the pixel to be interpolated.

The intra-field interpolation in step 3) uses an edge-difference algorithm. Dire1, Dire2, Dire3 and Dire4 are defined as the correlations of the pixel X to be interpolated in four directions, including the horizontal and vertical directions:

Dire1 = |D1 - D3| + |E1 - E3|
Dire2 = |D1 - E1| + |D3 - E3|
Dire3 = |D2 - E1| + |D3 - E2|
Dire4 = |D1 - E2| + |D2 - E3|

The values of Dire1, Dire2, Dire3 and Dire4 measure the correlation between the pixel to be interpolated and the existing pixels in the corresponding direction; the smaller the value, the stronger the correlation. Interpolation is performed along the direction of strongest correlation, as follows:

If the correlation is strongest in direction Dire1, then

X_intra = (Max(Min(A,B,C), Min(D,E,F), Min(B,E)) + Min(Max(A,B,C), Max(D,E,F), Max(B,E)))/2

If the correlation is strongest in direction Dire2, then

X_intra = (Max(Min(A,B,D), Min(C,E,F), Min(B,E)) + Min(Max(A,B,D), Max(C,E,F), Max(B,E)))/2

If the correlation is strongest in direction Dire3, then

X_intra = (Max(Min(B,D), Min(C,E), Min(B,E)) + Min(Max(B,D), Max(C,E), Max(B,E)))/2

If the correlation is strongest in direction Dire4, then

X_intra = (Max(Min(A,E), Min(B,F), Min(B,E)) + Min(Max(A,E), Max(B,F), Max(B,E)))/2

where X_intra is the intra-field interpolated pixel value, Min takes the minimum and Max takes the maximum.

The inter-field interpolation in step 3) uses an improved three-point median filter, specifically:

X_inter = median(F2, X_intra, C2), if α > i_sel; X_inter = average(F2, C2), otherwise,

where X_inter is the inter-field interpolated pixel value, median is the median filter function, X_intra is the intra-field interpolated pixel value, C2 is the known pixel value at the same position in field n-1, F2 is the known pixel value at the same position in field n+1, i_sel is the selection coefficient between the two inter-field interpolation modes (a fraction between 0 and 1), and α is the interpolation coefficient.

In step 4), the interpolated pixel value X at the point to be inserted in field n is:

X = Z, if (y mod 2 = n mod 2); X = α*X_inter + (1 - α)*X_intra, otherwise,

where n is the index of the field containing the point to be inserted, i.e. the current field, y is the line number of the point to be inserted, and Z is the pixel already present in the current field.

The invention improves on existing motion-adaptive de-interlacing algorithms and proposes a new motion-adaptive de-interlacing method based on visual characteristics. The motion detection stage is improved first: adding a low-pass FIR filter not only avoids losing motion inside moving objects but also effectively reduces noise. Next, according to the visual characteristics of the human eye, a luminance-based dual-threshold dynamic adjustment is introduced, which improves the subjective de-interlacing quality. Finally, different inter-field interpolation methods are introduced for different pixels, greatly improving the interpolation accuracy of slowly moving pixels. The method achieves a clear improvement in the interpolation of slowly moving content and a clear subjective improvement for all kinds of video sources.

Brief Description of the Drawings

Figure 1 is a flow chart of the motion-adaptive de-interlacing method based on visual characteristics of the present invention;

Figure 2 is a diagram of the working principle of the present invention;

Figure 3 is a schematic diagram of the motion detection method of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings.

As shown in Figures 1, 2 and 3, the motion-adaptive de-interlacing method based on visual characteristics of the present invention comprises:

1) Continuously store five fields of interlaced video and perform motion detection on them, including the left and right neighbours of each pixel in the motion detection to form an FIR filter and obtain the motion detection value MV.

2) From the motion detection value MV, obtain the interpolation coefficient α with a dynamic dual-threshold method.

3) Compute the intra-field interpolation with an edge-difference algorithm and the inter-field interpolation with an improved three-point median filter.

4) From the interpolation coefficient α and the intra-field and inter-field interpolation results, interpolate the missing pixel adaptively and finally interleave it with the existing lines of the current field to produce progressive output.

Each step is described in detail below.

1. Continuously store five fields of interlaced video and perform motion detection on them, including the left and right neighbours of each pixel in the motion detection to form an FIR filter and obtain the motion detection value MV.

The FIR filter coefficients are [1/4, 1/2, 1/4], and the input data of the motion detection module are:

A = (A1 + 2*A2 + A3)/4,
B = (B1 + 2*B2 + B3)/4,
C = (C1 + 2*C2 + C3)/4,
D = (D1 + 2*D2 + D3)/4,
E = (E1 + 2*E2 + E3)/4,
F = (F1 + 2*F2 + F3)/4,
G = (G1 + 2*G2 + G3)/4,
H = (H1 + 2*H2 + H3)/4,

MV1 = |A - D|,
MV2 = |B - E|,
MV3 = |C - F|,
MV4 = |D - G|,
MV5 = |E - H|,

MV = max(MV1, MV2, MV3, MV4, MV5)

where A1/A2/A3 and B1/B2/B3 are three pixels on each of two interlaced lines of field n-2, C1/C2/C3 are three pixels on one line of field n-1, D1/D2/D3 and E1/E2/E3 are three pixels on each of two interlaced lines of field n, F1/F2/F3 are three pixels on one line of field n+1, G1/G2/G3 and H1/H2/H3 are three pixels on each of two interlaced lines of field n+2, MV1, MV2, MV3, MV4 and MV5 are the differences between corresponding pixels of each compared pair of fields, and MV is the motion detection value.
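
To make the data flow of this step concrete, the following Python sketch computes MV from the eight horizontal three-pixel groups defined above. The helper names fir3 and motion_value and the convention of passing each group as a (left, centre, right) triple are assumptions made for illustration; they are not part of the patent text.

```python
def fir3(triple):
    """Low-pass FIR filter [1/4, 1/2, 1/4] applied to a horizontal (left, centre, right) triple."""
    left, centre, right = triple
    return (left + 2 * centre + right) / 4.0


def motion_value(a, b, c, d, e, f, g, h):
    """Motion detection value MV for one pixel position.

    a, b: triples from field n-2 (lines above/below the missing pixel)
    c:    triple from field n-1 (line of the missing pixel)
    d, e: triples from field n   (lines above/below the missing pixel)
    f:    triple from field n+1 (line of the missing pixel)
    g, h: triples from field n+2 (lines above/below the missing pixel)
    """
    A, B, C, D, E, F, G, H = (fir3(t) for t in (a, b, c, d, e, f, g, h))
    mv1 = abs(A - D)  # field n-2 vs field n, upper line
    mv2 = abs(B - E)  # field n-2 vs field n, lower line
    mv3 = abs(C - F)  # field n-1 vs field n+1, centre line
    mv4 = abs(D - G)  # field n vs field n+2, upper line
    mv5 = abs(E - H)  # field n vs field n+2, lower line
    return max(mv1, mv2, mv3, mv4, mv5)
```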

2. From the motion detection value MV, obtain the interpolation coefficient α with a dynamic dual-threshold method.

Because the sensitivity of the human eye to motion differs across brightness regions, the interpolation coefficient α is computed with a dynamic dual-threshold method. The eye is less sensitive to motion in areas of high or low brightness than in areas of moderate brightness, so the thresholds for pixels of high or low brightness should be smaller than the thresholds for areas of moderate brightness. A dual-threshold adjustment that is convenient for hardware implementation is used: the base upper and lower threshold values are chosen as 80 and 30 respectively, and the new dual thresholds are obtained from the following formulas.

MV_L = 30 - Y*10/128, if Y < 128; MV_L = 30 - (255 - Y)*10/128, otherwise,

MV_H = 80 - Y*10/128, if Y < 128; MV_H = 80 - (255 - Y)*10/128, otherwise.

Suitable interpolation coefficients α and (1 - α) are then selected for the intra-field and inter-field interpolations; α is calculated as:

α = 0, if MV ≤ MV_L; α = 1, if MV ≥ MV_H; α = (MV - MV_L)/(MV_H - MV_L), otherwise,

where MV_L is the lower threshold, MV_H is the upper threshold, and Y is the luminance of the pixel to be interpolated.
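
A minimal sketch of the dynamic thresholds and the coefficient α, assuming floating-point arithmetic for readability (a hardware implementation would use fixed-point); the helper names dynamic_thresholds and interp_coeff are illustrative assumptions.

```python
def dynamic_thresholds(y_luma):
    """Luminance-dependent lower/upper thresholds (base values 30 and 80)."""
    if y_luma < 128:
        offset = y_luma * 10 / 128.0
    else:
        offset = (255 - y_luma) * 10 / 128.0
    return 30 - offset, 80 - offset  # (MV_L, MV_H)


def interp_coeff(mv, y_luma):
    """Interpolation coefficient alpha derived from MV and the dynamic thresholds."""
    mv_l, mv_h = dynamic_thresholds(y_luma)
    if mv <= mv_l:
        return 0.0
    if mv >= mv_h:
        return 1.0
    return (mv - mv_l) / (mv_h - mv_l)
```

Under these formulas a mid-grey pixel (Y = 128) gets thresholds of roughly 20 and 70, while a very dark pixel (Y = 0) keeps the base values 30 and 80.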

3. Compute the intra-field interpolation with an edge-difference algorithm and the inter-field interpolation with an improved three-point median filter.

3.1 Inter-field interpolation

Traditional dual-threshold de-interlacing algorithms, such as nammc and GAH, use a three-point median filter for the inter-field interpolation, as in the equation:

X_inter = median(F, X_intra, C)

When the intra-field value X_intra lies between F and C, the final interpolation result is:

X_inter = X_intra

In this case the interpolation result is determined entirely by the intra-field result. For pixels with little motion, computing the result purely from intra-field interpolation clearly introduces a large error. Therefore, for pixels with little motion, an inter-field interpolation that relies more on the pixels of the neighbouring fields is adopted here, namely:

X_inter = median(F2, X_intra, C2), if α > i_sel; X_inter = average(F2, C2), otherwise,

where X_inter is the inter-field interpolated pixel value, median is the median filter function, X_intra is the intra-field interpolated pixel value, C2 is the known pixel value at the same position in field n-1, F2 is the known pixel value at the same position in field n+1, i_sel is the selection coefficient between the two inter-field interpolation modes and is a fraction between 0 and 1 (typically 0.5 here), and α is the interpolation coefficient.
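
A minimal sketch of this selection between the three-point median (for pixels judged to be moving, α > i_sel) and the temporal average of the co-located pixels in fields n-1 and n+1 (for slowly moving pixels), assuming F2, C2, X_intra and α have already been computed; the function name inter_field and the default i_sel = 0.5 follow the value suggested above.

```python
def inter_field(f2, c2, x_intra, alpha, i_sel=0.5):
    """Inter-field value X_inter for the missing pixel."""
    if alpha > i_sel:
        # Three-point median of the two temporal neighbours and the intra-field value.
        return sorted((f2, x_intra, c2))[1]
    # Slowly moving pixel: rely on the co-located pixels of fields n-1 and n+1.
    return (f2 + c2) / 2.0
```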

3.2 Intra-field interpolation

The intra-field interpolation uses an edge-difference algorithm, which greatly improves the interpolation quality at edges. First, an interpolation reference window is selected around the pixel X to be interpolated; the image correlation is then evaluated in four directions, including the horizontal and vertical directions, and the X_intra value of X is obtained along the direction of strongest correlation. The correlations in the four directions are defined as follows:

Dire1 = |D1 - D3| + |E1 - E3|
Dire2 = |D1 - E1| + |D3 - E3|
Dire3 = |D2 - E1| + |D3 - E2|
Dire4 = |D1 - E2| + |D2 - E3|

The values of Dire1, Dire2, Dire3 and Dire4 measure the correlation between the pixel to be interpolated and the existing pixels in the corresponding direction; the smaller the value, the stronger the correlation. Interpolation is performed along the direction of strongest correlation, as follows:

If the correlation is strongest in direction Dire1, then

X_intra = (Max(Min(A,B,C), Min(D,E,F), Min(B,E)) + Min(Max(A,B,C), Max(D,E,F), Max(B,E)))/2

If the correlation is strongest in direction Dire2, then

X_intra = (Max(Min(A,B,D), Min(C,E,F), Min(B,E)) + Min(Max(A,B,D), Max(C,E,F), Max(B,E)))/2

If the correlation is strongest in direction Dire3, then

X_intra = (Max(Min(B,D), Min(C,E), Min(B,E)) + Min(Max(B,D), Max(C,E), Max(B,E)))/2

If the correlation is strongest in direction Dire4, then

X_intra = (Max(Min(A,E), Min(B,F), Min(B,E)) + Min(Max(A,E), Max(B,F), Max(B,E)))/2

Here X_intra is the intra-field interpolated pixel value, Min takes the minimum and Max takes the maximum.
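
The edge-directed intra-field interpolation can be sketched as below. The mapping of the letters A–F in the interpolation formulas onto the reference window is not spelled out here, so the sketch assumes that A, B, C stand for D1, D2, D3 (the line above the missing pixel) and D, E, F for E1, E2, E3 (the line below); the function name intra_field is likewise an assumption.

```python
def intra_field(d1, d2, d3, e1, e2, e3):
    """Edge-directed intra-field value X_intra for the missing pixel X."""
    # Direction measures: the smaller the value, the stronger the correlation.
    dire1 = abs(d1 - d3) + abs(e1 - e3)
    dire2 = abs(d1 - e1) + abs(d3 - e3)
    dire3 = abs(d2 - e1) + abs(d3 - e2)
    dire4 = abs(d1 - e2) + abs(d2 - e3)

    # Assumed mapping of the formula letters onto the window pixels.
    a, b, c = d1, d2, d3   # line above X
    d, e, f = e1, e2, e3   # line below X

    best = min(dire1, dire2, dire3, dire4)
    if best == dire1:
        lo = max(min(a, b, c), min(d, e, f), min(b, e))
        hi = min(max(a, b, c), max(d, e, f), max(b, e))
    elif best == dire2:
        lo = max(min(a, b, d), min(c, e, f), min(b, e))
        hi = min(max(a, b, d), max(c, e, f), max(b, e))
    elif best == dire3:
        lo = max(min(b, d), min(c, e), min(b, e))
        hi = min(max(b, d), max(c, e), max(b, e))
    else:
        lo = max(min(a, e), min(b, f), min(b, e))
        hi = min(max(a, e), max(b, f), max(b, e))
    return (lo + hi) / 2.0
```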

4. From the interpolation coefficient α and the intra-field and inter-field interpolation results, interpolate the missing pixel adaptively and finally interleave it with the existing lines of the current field to produce progressive output.

According to the motion detection result MV, the adaptive interpolation weighting coefficient α, and the intra-field and inter-field interpolation results, adaptive interpolation is performed; the final output value X of the pixel to be interpolated on a missing line of the current field n is:

X = Z, if (y mod 2 = n mod 2); X = α*X_inter + (1 - α)*X_intra, otherwise,

where n is the index of the field containing the point to be inserted, i.e. the current field, y is the line number of the point to be inserted, and Z is the pixel already present in the current field.

Finally, the result is interleaved with the existing lines of the current field to produce progressive output.
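
As a final illustration, a sketch of the per-pixel output decision under the same assumptions as the earlier sketches: lines already present in the current field are copied through unchanged (weaving), and missing lines are blended from the inter-field and intra-field results with the coefficient α as in the formula above.

```python
def output_pixel(z_existing, x_inter, x_intra, alpha, y, n):
    """Pixel at line y of the progressive frame reconstructed from field n."""
    if y % 2 == n % 2:
        return z_existing  # line already present in the current field
    return alpha * x_inter + (1 - alpha) * x_intra
```

For example, a missing-line pixel with α = 0.25 is output as 0.25*X_inter + 0.75*X_intra, while pixels on existing lines pass through unchanged.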

The above is only a preferred embodiment of the present invention; the invention is not limited to this embodiment, and minor local structural changes may be made during implementation. Any changes or modifications that do not depart from the spirit and scope of the invention and that fall within the claims of the invention and their technical equivalents are also intended to be covered by the invention.

Claims (5)

1. A motion-adaptive de-interlacing method based on visual characteristics, comprising:
1) continuously storing five fields of interlaced video information f_{n-2}, f_{n-1}, f_{n}, f_{n+1} and f_{n+2}, and performing motion detection on the five interlaced fields to obtain a motion detection value MV;
2) obtaining an interpolation coefficient α with a dynamic dual-threshold method according to the characteristics of human vision;
3) performing intra-field and inter-field interpolation on field f_{n};
4) performing adaptive interpolation on field f_{n}, and finally interleaving with the existing lines of the current field to produce progressive output,
characterized in that in step 1) the left and right neighbours of each pixel are included in the motion detection, forming an FIR filter,
A=(A1+2*A2+A3)/4,
B=(B1+2*B2+B3)/4,
C=(C1+2*C2+C3)/4,
D=(D1+2*D2+D3)/4,
E=(E1+2*E2+E3)/4,
F=(F1+2*F2+F3)/4,
G=(G1+2*G2+G3)/4,
H=(H1+2*H2+H3)/4,
MV1=|A-D|,
MV2=|B-E|,
MV3=|C-F|,
MV4=|D-G|,
MV5=|E-H|,
MV=max(MV1,MV2,MV3,MV4,MV5),
wherein A1/A2/A3 and B1/B2/B3 denote three pixels on each of two interlaced lines of field n-2, C1/C2/C3 denote three pixels on one line of field n-1, D1/D2/D3 and E1/E2/E3 denote three pixels on each of two interlaced lines of field n, F1/F2/F3 denote three pixels on one line of field n+1, G1/G2/G3 and H1/H2/H3 denote three pixels on each of two interlaced lines of field n+2, MV1, MV2, MV3, MV4 and MV5 denote the differences between corresponding pixels of each compared pair of fields, and MV is the motion detection value.
2. The motion-adaptive de-interlacing method based on visual characteristics according to claim 1, characterized in that in step 2) the interpolation coefficient α is obtained with a dynamic dual-threshold method,
α = 0, if MV ≤ MV_L; α = 1, if MV ≥ MV_H; α = (MV - MV_L)/(MV_H - MV_L), otherwise,
wherein MV_L denotes the lower threshold and MV_H denotes the upper threshold,
MV_L = 30 - Y*10/128, if Y < 128; MV_L = 30 - (255 - Y)*10/128, otherwise,
MV_H = 80 - Y*10/128, if Y < 128; MV_H = 80 - (255 - Y)*10/128, otherwise,
and Y denotes the luminance of the pixel to be interpolated.
3. The motion-adaptive de-interlacing method based on visual characteristics according to claim 2, characterized in that the intra-field interpolation in step 3) uses an edge-difference algorithm, Dire1, Dire2, Dire3 and Dire4 being defined as the correlations of the pixel X to be interpolated in four directions, including the horizontal and vertical directions,
Dire1=|D1-D3|+|E1-E3|
Dire2=|D1-E1|+|D3-E3|
Dire3=|D2-E1|+|D3-E2|
Dire4=|D1-E2|+|D2-E3|
wherein the values of Dire1, Dire2, Dire3 and Dire4 measure the correlation between the pixel to be interpolated and the existing pixels in the corresponding direction, a smaller value indicating a stronger correlation, and interpolation is performed along the direction of strongest correlation as follows:
if the correlation is strongest in direction Dire1, then
X_intra=(Max(Min(A,B,C),Min(D,E,F),Min(B,E))+Min(Max(A,B,C),Max(D,E,F),Max(B,E)))/2
if the correlation is strongest in direction Dire2, then
X_intra=(Max(Min(A,B,D),Min(C,E,F),Min(B,E))+Min(Max(A,B,D),Max(C,E,F),Max(B,E)))/2
if the correlation is strongest in direction Dire3, then
X_intra=(Max(Min(B,D),Min(C,E),Min(B,E))+Min(Max(B,D),Max(C,E),Max(B,E)))/2
if the correlation is strongest in direction Dire4, then
X_intra=(Max(Min(A,E),Min(B,F),Min(B,E))+Min(Max(A,E),Max(B,F),Max(B,E)))/2
wherein X_intra denotes the intra-field interpolated pixel value, Min takes the minimum and Max takes the maximum.
4. The motion-adaptive de-interlacing method based on visual characteristics according to claim 3, characterized in that the inter-field interpolation in step 3) uses an improved three-point median filter, specifically:
X_inter = median(F2, X_intra, C2), if α > i_sel; X_inter = average(F2, C2), otherwise,
wherein X_inter denotes the inter-field interpolated pixel value, median is the median filter function, X_intra denotes the intra-field interpolated pixel value, C2 is the known pixel value at the same position in field n-1, F2 is the known pixel value at the same position in field n+1, i_sel is the selection coefficient between the two inter-field interpolation modes and is a fraction between 0 and 1, and α denotes the interpolation coefficient.
5. The motion-adaptive de-interlacing method based on visual characteristics according to claim 4, characterized in that in the adaptive interpolation of step 4) the interpolated pixel value X of the point to be inserted in field n is:
X = Z, if (y mod 2 = n mod 2); X = α*X_inter + (1 - α)*X_intra, otherwise,
wherein n denotes the field index of the point to be inserted, i.e. the current field, y denotes the line number of the point to be inserted, and Z denotes the pixel already present in the current field.
CN201110146619XA 2011-06-02 2011-06-02 Motion self-adaptive de-interlacing method based on visual characteristics Pending CN102215368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110146619XA CN102215368A (en) 2011-06-02 2011-06-02 Motion self-adaptive de-interlacing method based on visual characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110146619XA CN102215368A (en) 2011-06-02 2011-06-02 Motion self-adaptive de-interlacing method based on visual characteristics

Publications (1)

Publication Number Publication Date
CN102215368A true CN102215368A (en) 2011-10-12

Family

ID=44746468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110146619XA Pending CN102215368A (en) 2011-06-02 2011-06-02 Motion self-adaptive de-interlacing method based on visual characteristics

Country Status (1)

Country Link
CN (1) CN102215368A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497524A (en) * 2011-12-05 2012-06-13 四川虹微技术有限公司 Edge adaptive de-interlacing interpolation method
CN102509311A (en) * 2011-11-21 2012-06-20 华亚微电子(上海)有限公司 Motion detection method and device
CN104202555A (en) * 2014-09-29 2014-12-10 建荣集成电路科技(珠海)有限公司 Method and device for deinterlacing
CN107135367A (en) * 2017-04-26 2017-09-05 西安诺瓦电子科技有限公司 Video interlace-removing method and device, method for processing video frequency and device
CN107306346A (en) * 2016-04-18 2017-10-31 深圳市中兴微电子技术有限公司 Image processing method and device, player, electronic equipment
CN108199735A (en) * 2018-02-06 2018-06-22 成都纳雷科技有限公司 A kind of Adaptive Suppression method for transmitting radar antenna crosstalk, wave filter
CN105025241B (en) * 2014-04-30 2018-08-24 深圳市中兴微电子技术有限公司 A kind of image de-interlacing apparatus and method
CN113261276A (en) * 2019-01-09 2021-08-13 西安诺瓦星云科技股份有限公司 Deinterlacing interpolation method, device and system, video processing method and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101014086A (en) * 2007-01-31 2007-08-08 天津大学 De-interlacing apparatus using motion detection and adaptive weighted filter
US20080062309A1 (en) * 2006-09-07 2008-03-13 Texas Instruments Incorporated Motion detection for interlaced video
CN101309385A (en) * 2008-07-09 2008-11-19 北京航空航天大学 A Deinterlacing Method Based on Motion Detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080062309A1 (en) * 2006-09-07 2008-03-13 Texas Instruments Incorporated Motion detection for interlaced video
CN101014086A (en) * 2007-01-31 2007-08-08 天津大学 De-interlacing apparatus using motion detection and adaptive weighted filter
CN101309385A (en) * 2008-07-09 2008-11-19 北京航空航天大学 A Deinterlacing Method Based on Motion Detection

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509311A (en) * 2011-11-21 2012-06-20 华亚微电子(上海)有限公司 Motion detection method and device
CN102509311B (en) * 2011-11-21 2015-01-21 华亚微电子(上海)有限公司 Motion detection method and device
CN102497524B (en) * 2011-12-05 2014-02-12 四川虹微技术有限公司 Edge adaptive de-interlacing interpolation method
CN102497524A (en) * 2011-12-05 2012-06-13 四川虹微技术有限公司 Edge adaptive de-interlacing interpolation method
CN105025241B (en) * 2014-04-30 2018-08-24 深圳市中兴微电子技术有限公司 A kind of image de-interlacing apparatus and method
CN104202555A (en) * 2014-09-29 2014-12-10 建荣集成电路科技(珠海)有限公司 Method and device for deinterlacing
CN104202555B (en) * 2014-09-29 2017-10-20 建荣集成电路科技(珠海)有限公司 Interlace-removing method and device
CN107306346A (en) * 2016-04-18 2017-10-31 深圳市中兴微电子技术有限公司 Image processing method and device, player, electronic equipment
CN107306346B (en) * 2016-04-18 2019-11-22 深圳市中兴微电子技术有限公司 Image data processing method and device, player, electronic device
CN107135367A (en) * 2017-04-26 2017-09-05 西安诺瓦电子科技有限公司 Video interlace-removing method and device, method for processing video frequency and device
CN107135367B (en) * 2017-04-26 2019-10-29 西安诺瓦星云科技股份有限公司 Video interlace-removing method and device, method for processing video frequency and device
CN108199735A (en) * 2018-02-06 2018-06-22 成都纳雷科技有限公司 A kind of Adaptive Suppression method for transmitting radar antenna crosstalk, wave filter
CN108199735B (en) * 2018-02-06 2019-09-17 成都纳雷科技有限公司 It is a kind of for the Adaptive Suppression method of transmitting radar antenna crosstalk, filter
CN113261276A (en) * 2019-01-09 2021-08-13 西安诺瓦星云科技股份有限公司 Deinterlacing interpolation method, device and system, video processing method and storage medium
CN113261276B (en) * 2019-01-09 2023-08-22 西安诺瓦星云科技股份有限公司 De-interlacing interpolation method, de-interlacing interpolation device, de-interlacing interpolation system, video processing method and storage medium

Similar Documents

Publication Publication Date Title
CN102215368A (en) Motion self-adaptive de-interlacing method based on visual characteristics
Lin et al. Motion adaptive interpolation with horizontal motion detection for deinterlacing
CN100479495C (en) Deinterlacing method using motion detection and adaptive weighted filtering
CN100559836C (en) A Noise Reduction Method for Digital Image
CN101600061B (en) Video motion-adaptive de-interlacing method and device therefor
JP5645699B2 (en) Motion detection device and method, video signal processing device and method, and video display device
CN101309385B (en) A Deinterlacing Method Based on Motion Detection
CN101014086A (en) De-interlacing apparatus using motion detection and adaptive weighted filter
CN102025960B (en) Motion compensation de-interlacing method based on adaptive interpolation
CN103051857B (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
JP3842756B2 (en) Method and system for edge adaptive interpolation for interlace-to-progressive conversion
CN101510985B (en) Self-adapting de-interleave method for movement compensation accessory movement
CN102364933A (en) An Adaptive Deinterlacing Method Based on Motion Classification
JP2013030862A (en) Image processing apparatus, image processing method, and image display apparatus
CN102447870A (en) Stationary object detection method and motion compensation device
US7787048B1 (en) Motion-adaptive video de-interlacer
US9495728B2 (en) Method for edge detection, method for motion detection, method for pixel interpolation utilizing up-sampling, and apparatuses thereof
CN103024331B (en) Video de-interlacing method based on edge detection
CN1199449C (en) Method for removing intersection by adopting error protection motion compensation and its equipment
CN106027943B (en) A kind of video interlace-removing method
CN102497525A (en) Motion compensation deinterlacing method
US8274605B2 (en) System and method for adjacent field comparison in video processing
CN101483747B (en) Movement detection method suitable for deinterlacing technique
CN102497492B (en) Detection method for subtitle moving in screen
US8228429B2 (en) Reducing artifacts as a result of video de-interlacing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20111012