CN102708568B - Stereoscopic image objective quality evaluation method on basis of structural distortion - Google Patents
Stereoscopic image objective quality evaluation method on basis of structural distortion
- Publication number
- CN102708568B CN102708568B CN201210145034.0A CN201210145034A CN102708568B CN 102708568 B CN102708568 B CN 102708568B CN 201210145034 A CN201210145034 A CN 201210145034A CN 102708568 B CN102708568 B CN 102708568B
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses an objective quality evaluation method for stereoscopic images based on structural distortion. First, the left and right viewpoint images of the undistorted stereoscopic image and of the distorted stereoscopic image are divided into regions, yielding the regions to which the human eye is sensitive and the corresponding non-sensitive regions, and evaluation indices for the sensitive and non-sensitive regions are derived from two aspects, structural amplitude distortion and structural direction distortion. Second, quality evaluation values for the left and right viewpoint images are obtained. Third, the singular value differences and the mean deviation rate of the residual images obtained after stripping the singular values are used to measure the distortion of the depth perception of the stereoscopic image, yielding an evaluation value of the stereoscopic perception quality. Finally, the left/right viewpoint image quality and the stereoscopic perception quality are combined to obtain the final quality evaluation result for the stereoscopic image. Because the method avoids simulating the individual components of the human visual system and instead makes full use of the structural information of the stereoscopic image, it effectively improves the consistency between the objective evaluation results and subjective perception.
Description
Technical Field
The invention relates to an image quality evaluation technique, and in particular to an objective quality evaluation method for stereoscopic images based on structural distortion.
Background Art
Stereoscopic image quality evaluation occupies a very important position in stereoscopic image/video systems: it can not only judge the merits of the processing algorithms in such systems, but also guide the optimization and design of those algorithms so as to improve the efficiency of stereoscopic image/video processing systems. Stereoscopic image quality evaluation methods fall into two categories: subjective quality evaluation and objective quality evaluation. A subjective quality evaluation method forms a comprehensive score by taking a weighted average of the quality ratings that multiple observers give to the stereoscopic image under evaluation. Its results match the characteristics of the human visual system, but it is limited by inconvenient computation, slow speed, high cost and other factors, making it difficult to embed in systems and therefore hard to deploy widely in practical applications. Objective quality evaluation methods, by contrast, are simple to operate, low in cost, easy to implement and suitable for real-time algorithm optimization, and have therefore become the focus of research on stereoscopic image quality evaluation.
At present, mainstream objective quality evaluation models for stereoscopic images consist of two parts: left/right viewpoint image quality evaluation and depth perception quality evaluation. However, because human understanding of the human visual system is limited, it is difficult to accurately simulate each of its components, so the consistency between these models and subjective perception is not very good.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an objective quality evaluation method for stereoscopic images based on structural distortion that can effectively improve the consistency between the objective quality evaluation results of stereoscopic images and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereoscopic images based on structural distortion, characterized by comprising the following steps:
① Let Sorg be the original undistorted stereoscopic image and Sdis the distorted stereoscopic image to be evaluated. Denote the left viewpoint grayscale image of Sorg as Lorg, the right viewpoint grayscale image of Sorg as Rorg, the left viewpoint grayscale image of Sdis as Ldis, and the right viewpoint grayscale image of Sdis as Rdis;
② Perform region division on each of the four images Lorg, Ldis, Rorg and Rdis, obtaining the sensitive-region matrix map corresponding to each of them. The coefficient matrices of the sensitive-region matrix maps obtained from the region division of Lorg and Ldis are both denoted AL, and the coefficient value at coordinate position (i,j) in AL is denoted AL(i,j). The coefficient matrices of the sensitive-region matrix maps obtained from the region division of Rorg and Rdis are both denoted AR, and the coefficient value at coordinate position (i,j) in AR is denoted AR(i,j), where here 0≤i≤(W-8) and 0≤j≤(H-8), W denotes the width of Lorg, Ldis, Rorg and Rdis, and H denotes their height;
③ Divide each of the two images Lorg and Ldis into (W-7)×(H-7) overlapping blocks of size 8×8, then compute the structural amplitude distortion map over all pairs of co-located overlapping blocks in Lorg and Ldis; denote the coefficient matrix of this structural amplitude distortion map as BL, and the coefficient value at coordinate position (i,j) in BL as BL(i,j).
Divide each of the two images Rorg and Rdis into (W-7)×(H-7) overlapping blocks of size 8×8, then compute the structural amplitude distortion map over all pairs of co-located overlapping blocks in Rorg and Rdis; denote the coefficient matrix of this structural amplitude distortion map as BR, and the coefficient value at coordinate position (i,j) in BR as BR(i,j).
④ Apply horizontal and vertical Sobel operator processing to the two images Lorg and Ldis, obtaining the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map corresponding to each of them. Denote the coefficient matrix of the horizontal-direction gradient matrix map obtained by applying the horizontal Sobel operator to Lorg as Ih,org,L, and the coefficient value at coordinate position (i,j) in Ih,org,L as Ih,org,L(i,j).
Apply horizontal and vertical Sobel operator processing to the two images Rorg and Rdis, obtaining the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map corresponding to each of them. Denote the coefficient matrix of the horizontal-direction gradient matrix map obtained by applying the horizontal Sobel operator to Rorg as Ih,org,R, and the coefficient value at coordinate position (i,j) in Ih,org,R as Ih,org,R(i,j).
⑤ Compute the structural direction distortion map over all pairs of co-located overlapping blocks in the two images Lorg and Ldis; denote the coefficient matrix of this structural direction distortion map as EL, and the coefficient value at coordinate position (i,j) in EL as EL(i,j).
Compute the structural direction distortion map over all pairs of co-located overlapping blocks in the two images Rorg and Rdis; denote the coefficient matrix of this structural direction distortion map as ER, and the coefficient value at coordinate position (i,j) in ER as ER(i,j).
⑥ Compute the structural distortion evaluation value of Lorg and Ldis, denoted QL: QL = ω1×Qm,L + ω2×Qnm,L.
Compute the structural distortion evaluation value of Rorg and Rdis, denoted QR: QR = ω'1×Qm,R + ω'2×Qnm,R.
⑦ From QL and QR, compute the spatial frequency similarity measure of the distorted stereoscopic image Sdis to be evaluated relative to the original undistorted stereoscopic image Sorg, denoted QF: QF = β1×QL + (1-β1)×QR, where β1 denotes the weight of QL;
⑧ Compute the absolute difference image of Lorg and Rorg, expressed in matrix form as |Lorg - Rorg|, and the absolute difference image of Ldis and Rdis, expressed in matrix form as |Ldis - Rdis|, where "||" denotes the absolute value operation;
⑨ Divide each of the two absolute difference images into non-overlapping blocks of size 8×8, then apply singular value decomposition to every block of each image, obtaining for each image a singular value map composed of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map obtained from the absolute difference image of the undistorted pair as Gorg; for the n-th block in Gorg, the singular value at coordinate position (p,q) of its singular value matrix is recorded. Denote the coefficient matrix of the singular value map obtained from the absolute difference image of the distorted pair as Gdis; for the n-th block in Gdis, the singular value at coordinate position (p,q) of its singular value matrix is recorded. Here WLR denotes the width of the absolute difference images, HLR denotes their height, 0≤p≤7 and 0≤q≤7;
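As a concrete illustration of this blockwise decomposition, the following sketch (naming mine, numpy-based, not the patent's reference implementation) splits a difference image into non-overlapping 8×8 tiles and collects each tile's singular values:

```python
import numpy as np

def blockwise_singular_values(D, block=8):
    """Split D into non-overlapping block x block tiles and return the
    vector of singular values of every tile, i.e. the raw material of the
    singular value maps G_org / G_dis of step 9. Trailing rows/columns
    that do not fill a complete tile are ignored in this sketch."""
    H, W = D.shape
    svs = []
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            tile = D[y:y + block, x:x + block].astype(np.float64)
            svs.append(np.linalg.svd(tile, compute_uv=False))
    return np.array(svs)  # shape: (number_of_blocks, block)
```

Applied to the two absolute difference images of step ⑧, this yields one row of singular values per 8×8 block, from which the singular value deviation measure of step ⑩ can then be computed.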
⑩ Compute the singular value deviation evaluation value between the singular value map of the undistorted absolute difference image and the singular value map of the distorted absolute difference image, denoted K.
Apply singular value decomposition to each of the two absolute difference images, obtaining for each image two orthogonal matrices and one singular value matrix. Denote the two orthogonal matrices obtained from the absolute difference image of the undistorted pair as χorg and Vorg, and its singular value matrix as Oorg; denote the two orthogonal matrices obtained from the absolute difference image of the distorted pair as χdis and Vdis, and its singular value matrix as Odis.
Compute the residual matrix of each absolute difference image after stripping its singular values. Denote the residual matrix of the undistorted absolute difference image as Xorg, Xorg = χorg×Λ×Vorg, and the residual matrix of the distorted absolute difference image as Xdis, Xdis = χdis×Λ×Vdis, where Λ denotes the identity matrix, whose size matches that of Oorg and Odis.
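Since Λ is the identity matrix, the residual X = χ×Λ×V reduces to the product of the two orthogonal factors of the SVD. A minimal sketch (function name mine):

```python
import numpy as np

def residual_after_stripping(D):
    """Decompose D = U @ diag(s) @ Vt, then rebuild it with the singular
    values replaced by the identity matrix Lambda, giving the residual
    X = U @ Lambda @ Vt, whose singular values are all 1."""
    U, s, Vt = np.linalg.svd(D.astype(np.float64), full_matrices=False)
    Lam = np.eye(len(s))  # Lambda: identity, same size as the singular value matrix
    return U @ Lam @ Vt
```

The residual keeps only the structural (orthogonal) part of the difference image, which is why the deviation between the original and distorted residuals can serve as a depth-distortion indicator.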
Compute the mean deviation rate of Xorg and Xdis, where x denotes the abscissa of the pixels in Xorg and Xdis and y denotes their ordinate.
Compute the stereoscopic perception evaluation metric of the distorted stereoscopic image Sdis to be evaluated relative to the original undistorted stereoscopic image Sorg, denoted QS, where τ denotes a constant used to adjust the relative importance of K and the mean deviation rate in QS.
From QF and QS, compute the image quality evaluation score of the distorted stereoscopic image Sdis to be evaluated, denoted Q: Q = QF×(QS)^ρ, where ρ denotes a weight coefficient value.
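The two fusion formulas, QF = β1×QL + (1-β1)×QR from step ⑦ and Q = QF×(QS)^ρ above, can be sketched as follows; the default parameter values are placeholders, since β1 and ρ are obtained by fitting in the patent:

```python
def fuse_quality(Q_L, Q_R, Q_S, beta1=0.5, rho=1.0):
    """Combine left/right view quality linearly, then combine the result
    with the stereo perception metric non-linearly. beta1 and rho are
    fitted against subjective scores; the defaults here are placeholders."""
    Q_F = beta1 * Q_L + (1.0 - beta1) * Q_R
    return Q_F * (Q_S ** rho)
```

With ρ > 1 the stereo perception term QS is emphasized; with ρ between 0 and 1 it is de-emphasized relative to the plane image quality QF.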
The process of obtaining, in step ②, the coefficient matrix AL of the sensitive-region matrix map corresponding to Lorg and Ldis is as follows:
②-a1. Apply horizontal and vertical Sobel operator processing to Lorg to obtain its horizontal-direction gradient image and vertical-direction gradient image, denoted Zh,l1 and Zv,l1 respectively; then compute the gradient magnitude map of Lorg, denoted Zl1, as Zl1(x,y) = √(Zh,l1(x,y)² + Zv,l1(x,y)²), where Zl1(x,y) denotes the gradient magnitude of the pixel at coordinate position (x,y) in Zl1, Zh,l1(x,y) denotes the horizontal-direction gradient value of the pixel at (x,y) in Zh,l1, Zv,l1(x,y) denotes the vertical-direction gradient value of the pixel at (x,y) in Zv,l1, 1≤x≤W', 1≤y≤H', here W' denotes the width of Zl1 and H' its height;
②-a2. Apply horizontal and vertical Sobel operator processing to Ldis to obtain its horizontal-direction gradient image and vertical-direction gradient image, denoted Zh,l2 and Zv,l2 respectively; then compute the gradient magnitude map of Ldis, denoted Zl2, as Zl2(x,y) = √(Zh,l2(x,y)² + Zv,l2(x,y)²), where Zl2(x,y) denotes the gradient magnitude of the pixel at coordinate position (x,y) in Zl2, Zh,l2(x,y) denotes the horizontal-direction gradient value of the pixel at (x,y) in Zh,l2, Zv,l2(x,y) denotes the vertical-direction gradient value of the pixel at (x,y) in Zv,l2, 1≤x≤W', 1≤y≤H', here W' denotes the width of Zl2 and H' its height;
②-a3. Compute the threshold T required for region division;
②-a4. Denote the gradient magnitude of the pixel at coordinate position (i,j) in Zl1 as Zl1(i,j), and the gradient magnitude of the pixel at coordinate position (i,j) in Zl2 as Zl2(i,j). Determine whether Zl1(i,j)>T or Zl2(i,j)>T holds; if so, the pixel at coordinate position (i,j) in Lorg and Ldis belongs to the sensitive region, and AL(i,j)=1; otherwise, the pixel at coordinate position (i,j) in Lorg and Ldis belongs to the non-sensitive region, and AL(i,j)=0, where 0≤i≤(W-8) and 0≤j≤(H-8);
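The four sub-steps above can be sketched as follows. The Sobel filtering and the magnitude computation follow the text directly; the patent's exact expression for the threshold T is not reproduced in this text, so the fallback below (the mean gradient magnitude of the reference view) is purely an illustrative stand-in, and all names are mine:

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # horizontal Sobel
KY = KX.T                                                              # vertical Sobel

def _filter3(img, k):
    # 3x3 cross-correlation with edge replication ('same' output size).
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def gradient_magnitude(img):
    # Steps 2-a1 / 2-a2: horizontal and vertical Sobel responses, combined.
    gh = _filter3(img, KX)
    gv = _filter3(img, KY)
    return np.sqrt(gh ** 2 + gv ** 2)

def sensitive_region_map(ref, dis, T=None):
    """Steps 2-a3 / 2-a4: binary map (1 = sensitive, 0 = non-sensitive).
    If T is not supplied, the mean magnitude of the reference view is used
    as a stand-in for the patent's (unreproduced) threshold formula."""
    Z1 = gradient_magnitude(ref)
    Z2 = gradient_magnitude(dis)
    if T is None:
        T = Z1.mean()  # illustrative stand-in, not the patent's formula
    # A pixel is sensitive if either gradient magnitude exceeds T.
    return ((Z1 > T) | (Z2 > T)).astype(np.uint8)
```

Running this on a reference/distorted view pair marks strong-edge pixels (where the human eye is most sensitive to structural distortion) with 1 and smooth regions with 0.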
The process of obtaining, in step ②, the coefficient matrix AR of the sensitive-region matrix map corresponding to Rorg and Rdis is as follows:
②-b1. Apply horizontal and vertical Sobel operator processing to Rorg to obtain its horizontal-direction gradient image and vertical-direction gradient image, denoted Zh,r1 and Zv,r1 respectively; then compute the gradient magnitude map of Rorg, denoted Zr1, as Zr1(x,y) = √(Zh,r1(x,y)² + Zv,r1(x,y)²), where Zr1(x,y) denotes the gradient magnitude of the pixel at coordinate position (x,y) in Zr1, Zh,r1(x,y) denotes the horizontal-direction gradient value of the pixel at (x,y) in Zh,r1, Zv,r1(x,y) denotes the vertical-direction gradient value of the pixel at (x,y) in Zv,r1, 1≤x≤W', 1≤y≤H', here W' denotes the width of Zr1 and H' its height;
②-b2. Apply horizontal and vertical Sobel operator processing to Rdis to obtain its horizontal-direction gradient image and vertical-direction gradient image, denoted Zh,r2 and Zv,r2 respectively; then compute the gradient magnitude map of Rdis, denoted Zr2, as Zr2(x,y) = √(Zh,r2(x,y)² + Zv,r2(x,y)²), where Zr2(x,y) denotes the gradient magnitude of the pixel at coordinate position (x,y) in Zr2, Zh,r2(x,y) denotes the horizontal-direction gradient value of the pixel at (x,y) in Zh,r2, Zv,r2(x,y) denotes the vertical-direction gradient value of the pixel at (x,y) in Zv,r2, 1≤x≤W', 1≤y≤H', here W' denotes the width of Zr2 and H' its height;
②-b3. Compute the threshold T' required for region division;
②-b4. Denote the gradient magnitude of the pixel at coordinate position (i,j) in Zr1 as Zr1(i,j), and the gradient magnitude of the pixel at coordinate position (i,j) in Zr2 as Zr2(i,j). Determine whether Zr1(i,j)>T' or Zr2(i,j)>T' holds; if so, the pixel at coordinate position (i,j) in Rorg and Rdis belongs to the sensitive region, and AR(i,j)=1; otherwise, the pixel at coordinate position (i,j) in Rorg and Rdis belongs to the non-sensitive region, and AR(i,j)=0, where 0≤i≤(W-8) and 0≤j≤(H-8).
The process of obtaining β1 in step ⑦ is as follows:
⑦-1. Use n undistorted stereoscopic images to establish a set of distorted stereoscopic images covering different distortion types and different distortion degrees; the distorted stereoscopic image set includes multiple distorted stereoscopic images, where n≥1.
⑦-2. Use a subjective quality evaluation method to obtain the difference mean opinion score of each distorted stereoscopic image in the set, denoted DMOS: DMOS = 100 - MOS, where MOS denotes the mean opinion score and DMOS∈[0,100].
⑦-3. Following the operations of steps ① through ⑥, compute, for the left viewpoint image of each distorted stereoscopic image in the set relative to the left viewpoint image of the corresponding undistorted stereoscopic image, the sensitive-region evaluation value Qm,L and the non-sensitive-region evaluation value Qnm,L.
⑦-4. Use a mathematical fitting method to fit the difference mean opinion scores DMOS of the distorted stereoscopic images in the set against the corresponding Qm,L and Qnm,L values, thereby obtaining the value of β1.
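The patent only says "a mathematical fitting method". One illustrative realization (an assumption on my part, not the patent's procedure) is a grid search that picks the β1 whose fused left/right score correlates best with DMOS over the training set:

```python
import numpy as np

def fit_beta1(Q_L, Q_R, dmos, grid=None):
    """Pick beta1 maximizing the absolute Pearson correlation between
    beta1*Q_L + (1-beta1)*Q_R and the DMOS values. The grid search is an
    illustrative stand-in for the unspecified fitting method."""
    Q_L, Q_R, dmos = (np.asarray(v, dtype=np.float64) for v in (Q_L, Q_R, dmos))
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    best_beta, best_cc = grid[0], -1.0
    for b in grid:
        q = b * Q_L + (1.0 - b) * Q_R
        cc = abs(np.corrcoef(q, dmos)[0, 1])
        if cc > best_cc:
            best_beta, best_cc = b, cc
    return float(best_beta)
```

Any other agreement criterion (e.g. minimizing RMSE after a monotone regression onto DMOS) could be substituted for the Pearson correlation here.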
Compared with the prior art, the advantage of the present invention is as follows. First, the left viewpoint images and right viewpoint images of the undistorted and distorted stereoscopic images are divided into regions, yielding the regions to which the human eye is sensitive and the corresponding non-sensitive regions, and evaluation indices for the sensitive and non-sensitive regions are derived from two aspects, structural amplitude distortion and structural direction distortion. Second, linear weighting is applied to obtain the left viewpoint image quality evaluation value and the right viewpoint image quality evaluation value, and from these the combined left/right viewpoint image quality. Third, since singular values characterize the structural information of a stereoscopic image well, the singular value differences and the mean deviation rate of the residual images obtained after stripping the singular values are used to measure the distortion of the depth perception of the stereoscopic image, yielding an evaluation value of the stereoscopic perception quality. Finally, the left/right viewpoint image quality and the stereoscopic perception quality are combined in a non-linear manner to obtain the final quality evaluation result of the stereoscopic image. Because the method avoids simulating the individual components of the human visual system and makes full use of the structural information of the stereoscopic image, it effectively improves the consistency between objective evaluation results and subjective perception.
Brief Description of the Drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2a is the Akko&Kayo (640×480) stereoscopic image;
Fig. 2b is the Alt Moabit (1024×768) stereoscopic image;
Fig. 2c is the Balloons (1024×768) stereoscopic image;
Fig. 2d is the Door Flowers (1024×768) stereoscopic image;
Fig. 2e is the Kendo (1024×768) stereoscopic image;
Fig. 2f is the Leaving Laptop (1024×768) stereoscopic image;
Fig. 2g is the Lovebird1 (1024×768) stereoscopic image;
Fig. 2h is the Newspaper (1024×768) stereoscopic image;
Fig. 2i is the Xmas (640×480) stereoscopic image;
Fig. 2j is the Puppy (720×480) stereoscopic image;
Fig. 2k is the Soccer2 (720×480) stereoscopic image;
Fig. 2l is the Horse (480×270) stereoscopic image;
Fig. 3 is the left viewpoint image quality evaluation block diagram of the method of the present invention;
Fig. 4a shows the CC performance variation between left viewpoint image quality and subjective perception quality under different α and ω1;
Fig. 4b shows the SROCC performance variation between left viewpoint image quality and subjective perception quality under different α and ω1;
Fig. 4c shows the RMSE performance variation between left viewpoint image quality and subjective perception quality under different α and ω1;
Fig. 5a shows, with ω1=1, the CC performance variation between left viewpoint image quality and subjective perception quality under different α;
Fig. 5b shows, with ω1=1, the SROCC performance variation between left viewpoint image quality and subjective perception quality under different α;
Fig. 5c shows, with ω1=1, the RMSE performance variation between left viewpoint image quality and subjective perception quality under different α;
Fig. 6a shows the CC performance variation between left/right viewpoint image quality and subjective perception quality under different β1;
Fig. 6b shows the SROCC performance variation between left/right viewpoint image quality and subjective perception quality under different β1;
Fig. 6c shows the RMSE performance variation between left/right viewpoint image quality and subjective perception quality under different β1;
Fig. 7a shows the CC performance variation between stereoscopic depth perception quality and subjective perception quality under different τ;
Fig. 7b shows the SROCC performance variation between stereoscopic depth perception quality and subjective perception quality under different τ;
Fig. 7c shows the RMSE performance variation between stereoscopic depth perception quality and subjective perception quality under different τ;
Fig. 8a shows the CC performance variation between stereoscopic image quality and subjective perception quality under different ρ;
Fig. 8b shows the SROCC performance variation between stereoscopic image quality and subjective perception quality under different ρ;
Fig. 8c shows the RMSE performance variation between stereoscopic image quality and subjective perception quality under different ρ.
Detailed Description of the Embodiments
The present invention is further described in detail below with reference to the embodiments shown in the accompanying drawings.
The objective quality evaluation method for stereoscopic images based on structural distortion proposed by the present invention evaluates, from the perspective of structural distortion, both the left/right viewpoint image quality and the stereoscopic perception quality of a stereoscopic image, and obtains the final quality evaluation value of the stereoscopic image by non-linear weighting. Fig. 1 gives the overall implementation block diagram of the method, which comprises the following steps:
① Let Sorg be the original undistorted stereoscopic image and Sdis the distorted stereoscopic image to be evaluated. Denote the left viewpoint grayscale image of Sorg as Lorg, the right viewpoint grayscale image of Sorg as Rorg, the left viewpoint grayscale image of Sdis as Ldis, and the right viewpoint grayscale image of Sdis as Rdis.
② Perform region division on each of the four images Lorg, Ldis, Rorg and Rdis, obtaining the sensitive-region matrix map corresponding to each of them. The coefficient matrices of the sensitive-region matrix maps obtained from the region division of Lorg and Ldis are both denoted AL, and the coefficient value at coordinate position (i,j) in AL is denoted AL(i,j). The coefficient matrices of the sensitive-region matrix maps obtained from the region division of Rorg and Rdis are both denoted AR, and the coefficient value at coordinate position (i,j) in AR is denoted AR(i,j), where here 0≤i≤(W-8) and 0≤j≤(H-8), W denotes the width of Lorg, Ldis, Rorg and Rdis, and H denotes their height.
In this specific embodiment, the process of obtaining, in step ②, the coefficient matrix AL of the sensitive-region matrix map corresponding to Lorg and Ldis is as follows:
②-a1、对Lorg作水平及垂直方向Sobel算子处理,得到Lorg的水平方向梯度图像和垂直方向梯度图像,分别记为Zh,l1和Zv,l1,然后计算Lorg的梯度幅值图,记为Zl1,其中,Zl1(x,y)表示Zl1中坐标位置为(x,y)的像素点的梯度幅值,Zh,l1(x,y)表示Zh,l1中坐标位置为(x,y)的像素点的水平方向梯度值,Zv,l1(x,y)表示Zv,l1中坐标位置为(x,y)的像素点的垂直方向梯度值,1≤x≤W',1≤y≤H',此处W'表示Zl1的宽,H'表示Zl1的高。②-a1. Perform horizontal and vertical Sobel operator processing on L org to obtain the horizontal gradient image and vertical gradient image of L org , which are recorded as Z h,l1 and Z v,l1 respectively, and then calculate the gradient of L org Amplitude diagram, denoted as Z l1 , Among them, Z l1 (x, y) represents the gradient magnitude of the pixel whose coordinate position is (x, y) in Z l1 , Z h, l1 (x, y) represents the coordinate position in Z h, l1 is (x, y) the horizontal direction gradient value of the pixel point, Z v,l1 (x,y) represents the vertical direction gradient value of the pixel point whose coordinate position is (x,y) in Z v,l1 , 1≤x≤W', 1≤y≤H', where W' represents the width of Z l1 , and H' represents the height of Z l1 .
②-a2. Apply horizontal and vertical Sobel operators to Ldis to obtain its horizontal and vertical gradient images, denoted Zh,l2 and Zv,l2 respectively, then compute the gradient magnitude map of Ldis, denoted Zl2, as Zl2(x,y) = sqrt(Zh,l2(x,y)² + Zv,l2(x,y)²), where Zl2(x,y) is the gradient magnitude of the pixel at coordinate position (x,y) in Zl2, Zh,l2(x,y) is the horizontal gradient value of the pixel at (x,y) in Zh,l2, Zv,l2(x,y) is the vertical gradient value of the pixel at (x,y) in Zv,l2, 1≤x≤W', 1≤y≤H', and W' and H' are the width and height of Zl2.
②-a3. Compute the threshold T required for partitioning the regions.
②-a4. Denote the gradient magnitude of the pixel at coordinate position (i,j) in Zl1 as Zl1(i,j), and that of the pixel at (i,j) in Zl2 as Zl2(i,j). Test whether Zl1(i,j)>T or Zl2(i,j)>T holds. If it does, the pixels at coordinate position (i,j) in Lorg and Ldis belong to the sensitive region and AL(i,j)=1; otherwise they belong to the non-sensitive region and AL(i,j)=0, where 0≤i≤(W-8) and 0≤j≤(H-8).
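The left-view partition of steps ②-a1 through ②-a4 can be sketched in dependency-free Python. This is a minimal illustration, not the patent's implementation: images are plain 2-D lists, and the threshold t is passed in as a parameter because the defining formula for T is not reproduced in this text.

```python
import math

# 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Sobel gradient magnitude for the interior pixels of a 2-D list."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[dy][dx] * img[y + dy - 1][x + dx - 1]
                     for dy in range(3) for dx in range(3))
            gy = sum(SOBEL_Y[dy][dx] * img[y + dy - 1][x + dx - 1]
                     for dy in range(3) for dx in range(3))
            mag[y][x] = math.hypot(gx, gy)  # sqrt(gx^2 + gy^2)
    return mag

def sensitive_region_map(z1, z2, t):
    """A(i,j) = 1 where either gradient magnitude exceeds the threshold t."""
    return [[1 if z1[j][i] > t or z2[j][i] > t else 0
             for i in range(len(z1[0]))] for j in range(len(z1))]
```

For a step edge the magnitude peaks at the edge columns, so those pixels are flagged as sensitive while flat areas are not.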
In this embodiment, the coefficient matrix AR of the sensitive-region matrix map shared by Rorg and Rdis in step ② is obtained as follows:
②-b1. Apply horizontal and vertical Sobel operators to Rorg to obtain its horizontal and vertical gradient images, denoted Zh,r1 and Zv,r1 respectively, then compute the gradient magnitude map of Rorg, denoted Zr1, as Zr1(x,y) = sqrt(Zh,r1(x,y)² + Zv,r1(x,y)²), where Zr1(x,y) is the gradient magnitude of the pixel at coordinate position (x,y) in Zr1, Zh,r1(x,y) is the horizontal gradient value of the pixel at (x,y) in Zh,r1, Zv,r1(x,y) is the vertical gradient value of the pixel at (x,y) in Zv,r1, 1≤x≤W', 1≤y≤H', and W' and H' are the width and height of Zr1.
②-b2. Apply horizontal and vertical Sobel operators to Rdis to obtain its horizontal and vertical gradient images, denoted Zh,r2 and Zv,r2 respectively, then compute the gradient magnitude map of Rdis, denoted Zr2, as Zr2(x,y) = sqrt(Zh,r2(x,y)² + Zv,r2(x,y)²), where Zr2(x,y) is the gradient magnitude of the pixel at coordinate position (x,y) in Zr2, Zh,r2(x,y) is the horizontal gradient value of the pixel at (x,y) in Zh,r2, Zv,r2(x,y) is the vertical gradient value of the pixel at (x,y) in Zv,r2, 1≤x≤W', 1≤y≤H', and W' and H' are the width and height of Zr2.
②-b3. Compute the threshold T' required for partitioning the regions.
②-b4. Denote the gradient magnitude of the pixel at coordinate position (i,j) in Zr1 as Zr1(i,j), and that of the pixel at (i,j) in Zr2 as Zr2(i,j). Test whether Zr1(i,j)>T' or Zr2(i,j)>T' holds. If it does, the pixels at coordinate position (i,j) in Rorg and Rdis belong to the sensitive region and AR(i,j)=1; otherwise they belong to the non-sensitive region and AR(i,j)=0, where 0≤i≤(W-8) and 0≤j≤(H-8).
In this embodiment, the 12 undistorted stereo images shown in Figs. 2a through 2l are used to build a set of distorted stereo images covering different distortion types and degrees. The distortion types are JPEG compression, JPEG2000 compression, Gaussian white noise, Gaussian blur and H.264 coding distortion, with the left-view and right-view images of each stereo pair distorted simultaneously and to the same degree. The set contains 312 distorted stereo images in total: 60 with JPEG compression, 60 with JPEG2000 compression, 60 with Gaussian white noise, 60 with Gaussian blur, and 72 with H.264 coding distortion. The region partitioning described above is applied to these 312 stereo images.
In this embodiment, the value of α determines the accuracy of the sensitive-region partition: if α is too large, sensitive regions are misclassified as non-sensitive; if it is too small, non-sensitive regions are misclassified as sensitive. Its value is therefore determined from the contribution of left-view or right-view image quality to overall stereo image quality.
③ Partition Lorg and Ldis each into (W-7)×(H-7) overlapping blocks of size 8×8, then compute the structural amplitude distortion map over all pairs of co-located blocks in Lorg and Ldis. Denote the coefficient matrix of this map as BL, with the coefficient value at coordinate position (i,j) in BL denoted BL(i,j).
Here, to account for the correlation between neighboring pixels, an 8×8 block overlaps its nearest left or right neighbor by 7 columns and, likewise, its nearest upper or lower neighbor by 7 rows.
Partition Rorg and Rdis each into (W-7)×(H-7) overlapping blocks of size 8×8, then compute the structural amplitude distortion map over all pairs of co-located blocks in Rorg and Rdis. Denote the coefficient matrix of this map as BR, with the coefficient value at coordinate position (i,j) in BR denoted BR(i,j).
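The overlapping partition of step ③ can be sketched as stride-1 8×8 windows, which yields exactly the (W-7)×(H-7) block count stated above (a minimal illustration on 2-D lists):

```python
def overlapping_blocks(img, size=8):
    """All size-by-size blocks at stride 1: (H-size+1) x (W-size+1) of them."""
    h, w = len(img), len(img[0])
    return [[[row[i:i + size] for row in img[j:j + size]]
             for i in range(w - size + 1)]
            for j in range(h - size + 1)]
```

Adjacent blocks share 7 of their 8 columns (or rows), matching the overlap described in the text.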
④ Apply horizontal and vertical Sobel operators to Lorg and Ldis, obtaining the horizontal and vertical gradient matrix maps of each image. Denote the coefficient matrix of the horizontal gradient matrix map obtained from Lorg as Ih,org,L, with the coefficient value at coordinate position (i,j) in Ih,org,L denoted Ih,org,L(i,j).
Apply horizontal and vertical Sobel operators to Rorg and Rdis, obtaining the horizontal and vertical gradient matrix maps of each image. Denote the coefficient matrix of the horizontal gradient matrix map obtained from Rorg as Ih,org,R, with the coefficient value at coordinate position (i,j) in Ih,org,R denoted Ih,org,R(i,j).
⑤ Compute the structural direction distortion map over all pairs of co-located blocks in Lorg and Ldis. Denote the coefficient matrix of this map as EL, with the coefficient value at coordinate position (i,j) in EL denoted EL(i,j).
Compute the structural direction distortion map over all pairs of co-located blocks in Rorg and Rdis. Denote the coefficient matrix of this map as ER, with the coefficient value at coordinate position (i,j) in ER denoted ER(i,j).
⑥ Compute the structural distortion evaluation value of Lorg and Ldis, denoted QL, as QL = ω1×Qm,L + ω2×Qnm,L.
Compute the structural distortion evaluation value of Rorg and Rdis, denoted QR, as QR = ω'1×Qm,R + ω'2×Qnm,R.
In this embodiment, Fig. 3 shows the implementation block diagram for left-view image quality evaluation. The 12 undistorted stereo images of Figs. 2a through 2l are used to build a set of 312 distorted stereo images, which are scored with a standard subjective quality evaluation method to obtain each image's Difference Mean Opinion Score (DMOS), i.e. its subjective quality score. DMOS is the difference between the full score (100) and the Mean Opinion Score (MOS), DMOS = 100 - MOS, so a larger DMOS indicates a lower-quality distorted stereo image and a smaller DMOS a higher-quality one, with DMOS ∈ [0, 100]. Separately, Qm,L and Qnm,L are computed for each of the 312 distorted stereo images by steps ① through ⑥ of the present method, and QL = ω1×Qm,L + (1-ω1)×Qnm,L is fitted with a four-parameter logistic function to obtain the values of α and ω1. Three objective criteria commonly used to assess image quality evaluation methods serve as evaluation indices: the Pearson Correlation Coefficient (CC) under nonlinear regression, the Spearman Rank-Order Correlation Coefficient (SROCC), and the Root Mean Squared Error (RMSE). CC reflects the prediction accuracy of the objective model, SROCC its monotonicity with respect to subjective perception, and RMSE its prediction error. Higher CC and SROCC values and lower RMSE values indicate better agreement between the objective evaluation method and DMOS. Figs. 4a, 4b and 4c show, respectively, the CC, SROCC and RMSE between left-view image quality and subjective quality for the 312 stereo images under different α and ω1. Analysis of these figures shows that CC and SROCC grow, and RMSE shrinks, as ω1 increases, indicating that left-view image quality is determined mainly by the quality of the sensitive region, while changes in α have little effect on the agreement between left-view image quality and subjective perception. Figs. 5a, 5b and 5c show, respectively, the CC, SROCC and RMSE between left-view image quality and subjective quality under different α with ω1 = 1 and ω2 = 0. Analysis of these figures shows that the CC, SROCC and RMSE values fluctuate only at the percentile level but each exhibits a peak. Accordingly, this embodiment takes ω1 = 1 and α = 2.1.
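The three criteria can be computed directly from paired score lists; a minimal sketch (the rank computation below ignores ties, which a full SROCC implementation would handle by averaging tied ranks):

```python
import math

def pearson_cc(x, y):
    """Pearson linear correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def ranks(v):
    """0-based ranks of the values in v (ties broken by order, not averaged)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def srocc(x, y):
    """Spearman rank-order correlation: Pearson CC of the rank vectors."""
    return pearson_cc(ranks(x), ranks(y))

def rmse(x, y):
    """Root mean squared error between two equal-length lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

A monotone but nonlinear relation gives SROCC = 1 while CC stays below 1, which is why both indices are reported.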
⑦ From QL and QR, compute the spatial frequency similarity measure of the distorted stereo image Sdis to be evaluated relative to the original undistorted stereo image Sorg, denoted QF: QF = β1×QL + (1-β1)×QR, where β1 is the weight of QL.
In this embodiment, β1 in step ⑦ is obtained as follows:
⑦-1. Use n undistorted stereo images to build a set of distorted stereo images covering different distortion types and degrees; the set contains multiple distorted stereo images, with n ≥ 1.
⑦-2. Use a subjective quality evaluation method to obtain the Difference Mean Opinion Score of each distorted stereo image in the set, denoted DMOS, with DMOS = 100 - MOS, where MOS is the mean subjective score and DMOS ∈ [0, 100].
⑦-3. Following the operations of steps ① through ⑥, compute for the left-view image of each distorted stereo image in the set, relative to the left-view image of the corresponding undistorted stereo image, the sensitive-region evaluation value Qm,L and the non-sensitive-region evaluation value Qnm,L.
⑦-4. Use a mathematical fitting method to fit the DMOS values of the distorted stereo images in the set against the corresponding Qm,L and Qnm,L, thereby obtaining the value of β1.
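Steps ⑦-1 through ⑦-4 can be illustrated with a sketch: the four-parameter logistic function used for the nonlinear mapping, plus a simple grid search choosing the β1 whose combined score correlates best with DMOS. The grid search is an illustrative stand-in for the patent's fitting procedure, not its actual method.

```python
import math

def logistic4(x, a, b, c, d):
    """Four-parameter logistic used to map objective scores onto DMOS."""
    return (a - b) / (1.0 + math.exp(-(x - c) / d)) + b

def pearson(x, y):
    """Pearson correlation coefficient; 0.0 if one input is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((u - mx) * (v - my) for u, v in zip(x, y))
    den = math.sqrt(sum((u - mx) ** 2 for u in x) *
                    sum((v - my) ** 2 for v in y))
    if den == 0.0:
        return 0.0
    return num / den

def fit_beta1(ql, qr, dmos, steps=100):
    """Pick the weight beta1 in [0, 1] maximizing |correlation| with DMOS."""
    best_b, best_cc = 0.0, -1.0
    for k in range(steps + 1):
        b = k / steps
        combined = [b * l + (1 - b) * r for l, r in zip(ql, qr)]
        cc = abs(pearson(combined, dmos))
        if cc > best_cc:
            best_b, best_cc = b, cc
    return best_b
```

With synthetic data in which DMOS tracks only the left-view scores, the search correctly returns β1 = 1.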
In this embodiment, β1 determines the contribution of QL to stereo image quality. For blocking artifacts, stereo image quality is roughly half the sum of the left-view and right-view image qualities; for blur distortion, it depends mainly on the better of the two views. Because the left-view and right-view images in the stereo image test set are distorted simultaneously and to the same degree, their qualities differ little, so changes in β1 have little effect on the subjective performance of the stereo images. QL and QR are first computed for each of the 312 distorted stereo images by steps ① through ⑥ of the present method, and a four-parameter fit then yields the value of β1. Figs. 6a, 6b and 6c show, respectively, the CC, SROCC and RMSE between the quality of the left- and right-view images and subjective quality under different β1. Analysis of these figures shows that as β1 varies, the CC, SROCC and RMSE values change little, fluctuating at the percentile level, but each exhibits a peak. Here β1 = 0.5 is taken.
⑧ Compute the absolute difference image of Lorg and Rorg, represented as a matrix, i.e. the element-wise |Lorg - Rorg|; likewise compute the absolute difference image of Ldis and Rdis, i.e. the element-wise |Ldis - Rdis|, where "| |" denotes taking the absolute value.
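Step ⑧ is an element-wise absolute difference of the two views; for example:

```python
def abs_difference_image(left, right):
    """Pixel-wise |left - right| of two equal-size 2-D lists."""
    return [[abs(l - r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]
```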
⑨ Partition each of the two absolute difference images into non-overlapping blocks of size 8×8, then apply singular value decomposition to every block of each image, obtaining for each image a singular value map composed of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map obtained from the undistorted difference image as Gorg, with the singular value at coordinate position (p,q) in the singular value matrix of its n-th block denoted accordingly; denote the coefficient matrix of the singular value map obtained from the distorted difference image as Gdis, likewise for its n-th block. Here WLR and HLR are the width and height of the two difference images, 0≤p≤7 and 0≤q≤7.
Here, to reduce computational complexity, an 8×8 block shares no rows or columns with its nearest left, right, upper or lower neighbor; that is, the blocks do not overlap.
⑩ Compute the singular value deviation evaluation value between the singular value map of the undistorted difference image and that of the distorted difference image, denoted K.
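The per-block singular value comparison of steps ⑨ and ⑩ can be illustrated with a dependency-free sketch. For brevity it uses 2×2 blocks, whose singular values have a closed form via the eigenvalues of MᵀM, rather than the patent's 8×8 blocks; the deviation measure shown (mean absolute singular-value difference) is an illustrative assumption, since the exact formula for K is not reproduced in this text.

```python
import math

def singular_values_2x2(m):
    """Singular values of a 2x2 matrix via eigenvalues of M^T M (closed form)."""
    (a, b), (c, d) = m
    # M^T M = [[a^2 + c^2, ab + cd], [ab + cd, b^2 + d^2]]
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    tr, disc = p + r, math.sqrt((p - r) ** 2 + 4 * q * q)
    return (math.sqrt((tr + disc) / 2), math.sqrt(max(0.0, (tr - disc) / 2)))

def singular_value_deviation(blocks_org, blocks_dis):
    """Illustrative deviation: mean absolute singular-value difference."""
    total, count = 0.0, 0
    for mo, md in zip(blocks_org, blocks_dis):
        so, sd = singular_values_2x2(mo), singular_values_2x2(md)
        total += sum(abs(x - y) for x, y in zip(so, sd))
        count += len(so)
    return total / count
```

A diagonal block has its diagonal entries as singular values, and identical block lists give zero deviation.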
Apply singular value decomposition to each of the two absolute difference images, obtaining for each two orthogonal matrices and one singular value matrix. For the undistorted difference image, denote the two orthogonal matrices χorg and Vorg and the singular value matrix Oorg; for the distorted difference image, denote the two orthogonal matrices χdis and Vdis and the singular value matrix Odis.
Compute the residual matrix of each difference image after its singular values are stripped: for the undistorted difference image, Xorg = χorg×Λ×Vorg; for the distorted difference image, Xdis = χdis×Λ×Vdis, where Λ is the identity matrix, of the same size as Oorg and Odis.
Compute the mean deviation rate of Xorg and Xdis, where x denotes the abscissa and y the ordinate of the pixels in Xorg and Xdis.
Compute the stereo perception evaluation metric of the distorted stereo image Sdis to be evaluated relative to the original undistorted stereo image Sorg, denoted QS, where τ is a constant that adjusts the relative importance of K and the mean deviation rate in QS.
In this embodiment, the absolute difference images of the 312 distorted stereo images and of the undistorted stereo images are first obtained, and the K value and mean deviation rate of each distorted stereo image are computed following step ⑧ and the subsequent steps of the present method. The value of τ determines the relative importance of the singular value deviation and the residual information in depth perception evaluation. Figs. 7a, 7b and 7c show, respectively, the CC, SROCC and RMSE between the stereo perception quality of the 312 distorted stereo images and subjective perception under different τ, with τ varied over the range [-16, 4]. Analysis of these figures shows that CC, SROCC and RMSE each have an extremum as τ varies, at roughly the same position; τ = -8 is taken here.
From QF and QS, compute the image quality evaluation score of the distorted stereo image Sdis to be evaluated, denoted Q: Q = QF×(QS)^ρ, where ρ is a weighting coefficient.
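The combination of step ⑦ and the final score Q = QF×(QS)^ρ reduce to two one-line functions; the defaults β1 = 0.5 and ρ = 0.3 are the values selected in this embodiment:

```python
def spatial_frequency_similarity(ql, qr, beta1=0.5):
    """Q_F: weighted combination of left- and right-view structural scores."""
    return beta1 * ql + (1.0 - beta1) * qr

def overall_quality(qf, qs, rho=0.3):
    """Final objective score: Q = Q_F * Q_S ** rho."""
    return qf * qs ** rho
```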
In this embodiment, QF and QS are computed for each of the 312 distorted stereo images following the steps of the present method from step ① onward, and Q = QF×(QS)^ρ is fitted with a four-parameter logistic function to obtain ρ. The value of ρ determines the contributions of the left- and right-view image quality and of the stereo perception quality to overall stereo image quality. Since both QF and QS decrease as the degree of distortion increases, ρ must be greater than 0. Figs. 8a, 8b and 8c show, respectively, the CC, SROCC and RMSE between the quality of the 312 stereo images and subjective quality under different ρ. Analysis of these figures shows that a ρ that is too large or too small degrades the consistency between the objective evaluation model and subjective perception; as ρ varies, CC, SROCC and RMSE each have an extremum at roughly the same position. Here ρ = 0.3 is taken.
The correlation between the final evaluation results of the image quality evaluation function Q = QF×(QS)^0.3 obtained in this embodiment and the subjective DMOS scores is analyzed as follows. The output value Q of the final stereo image quality evaluation is first computed by Q = QF×(QS)^0.3 and then fitted with a four-parameter logistic function, yielding the performance indices between the objective evaluation model and subjective perception. Four objective criteria commonly used to assess image quality evaluation methods serve as evaluation indices: CC, SROCC, the Outlier Ratio (OR) and RMSE. OR reflects the dispersion of the objective rating model, i.e. the proportion of distorted stereo images whose difference between the four-parameter-fitted evaluation value and DMOS exceeds a given threshold. Table 1 lists the CC, SROCC, OR and RMSE coefficients. The data show that the correlation between the output value Q and the subjective DMOS score is high: both the CC and SROCC values exceed 0.92 and the RMSE value is below 6.5, indicating that the objective evaluation results agree well with subjective human perception and demonstrating the effectiveness of the present method.
Table 1 Correlation between the image quality evaluation scores of the distorted stereo images obtained in this embodiment and the subjective scores
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210145034.0A CN102708568B (en) | 2012-05-11 | 2012-05-11 | Stereoscopic image objective quality evaluation method on basis of structural distortion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102708568A CN102708568A (en) | 2012-10-03 |
CN102708568B true CN102708568B (en) | 2014-11-05 |
Family
ID=46901288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210145034.0A Expired - Fee Related CN102708568B (en) | 2012-05-11 | 2012-05-11 | Stereoscopic image objective quality evaluation method on basis of structural distortion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102708568B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103108209B (en) * | 2012-12-28 | 2015-03-11 | 宁波大学 | Stereo image objective quality evaluation method based on integration of visual threshold value and passage |
CN104036502B (en) * | 2014-06-03 | 2016-08-24 | 宁波大学 | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology |
CN104574363B (en) * | 2014-12-12 | 2017-09-29 | 南京邮电大学 | A kind of full reference image quality appraisement method for considering gradient direction difference |
CN108074241B (en) * | 2018-01-16 | 2021-10-22 | 深圳大学 | Target image quality scoring method, device, terminal and storage medium |
CN110232680B (en) * | 2019-05-30 | 2021-04-27 | 广智微芯(扬州)有限公司 | Image ambiguity evaluation method and device |
CN113920065B (en) * | 2021-09-18 | 2023-04-28 | 天津大学 | Imaging quality evaluation method for visual detection system of industrial site |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833766A (en) * | 2010-05-11 | 2010-09-15 | 天津大学 | Stereoscopic Image Objective Quality Evaluation Algorithm Based on GSSIM |
CN101872479A (en) * | 2010-06-09 | 2010-10-27 | 宁波大学 | A Stereo Image Objective Quality Evaluation Method |
CN102142145A (en) * | 2011-03-22 | 2011-08-03 | 宁波大学 | Image quality objective evaluation method based on human eye visual characteristics |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7512286B2 (en) * | 2003-10-27 | 2009-03-31 | Hewlett-Packard Development Company, L.P. | Assessing image quality |
US8295565B2 (en) * | 2007-03-16 | 2012-10-23 | Sti Medical Systems, Llc | Method of image quality assessment to produce standardized imaging data |
JP4895204B2 (en) * | 2007-03-22 | 2012-03-14 | 富士フイルム株式会社 | Image component separation device, method, and program, and normal image generation device, method, and program |
Non-Patent Citations (4)
Title |
---|
Zhou Junming et al., "Objective quality evaluation model for stereoscopic images using singular value decomposition," Journal of Computer-Aided Design & Computer Graphics, vol. 23, no. 5, pp. 870-877, May 2011 *
Shen Lili et al., "Image quality evaluation method based on three-dimensional features and structural similarity," Journal of Optoelectronics·Laser, vol. 21, no. 11, pp. 1713-1719, Nov. 2010 *
Also Published As
Publication number | Publication date |
---|---|
CN102708568A (en) | 2012-10-03 |
Similar Documents
Publication | Title |
---|---|
CN101872479B (en) | Three-dimensional image objective quality evaluation method |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception |
CN106097327B (en) | In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic |
CN103581661B (en) | Method for evaluating visual comfort degree of three-dimensional image |
CN102333233B (en) | Stereo image quality objective evaluation method based on visual perception |
CN104036501B (en) | A kind of objective evaluation method for quality of stereo images based on rarefaction representation |
CN102708568B (en) | Stereoscopic image objective quality evaluation method on basis of structural distortion |
CN103517065B (en) | Method for objectively evaluating quality of degraded reference three-dimensional picture |
CN104581143B (en) | A kind of based on machine learning without with reference to objective evaluation method for quality of stereo images |
CN105744256A (en) | Three-dimensional image quality objective evaluation method based on graph-based visual saliency |
CN103413298B (en) | A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic |
CN104394403B (en) | A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts |
CN102209257A (en) | Stereo image quality objective evaluation method |
CN102595185A (en) | Stereo image quality objective evaluation method |
CN104036502B (en) | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology |
CN105282543B (en) | Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception |
CN102521825B (en) | An Objective Evaluation Method of Stereo Image Quality Based on Zero Watermark |
CN109429051B (en) | An objective evaluation method of no-reference stereoscopic video quality based on multi-view feature learning |
CN105407349A (en) | No-reference objective three-dimensional image quality evaluation method based on binocular visual perception |
CN104954778A (en) | Objective stereo image quality assessment method based on perception feature set |
CN103338380A (en) | Adaptive image quality objective evaluation method |
CN102843572B (en) | Phase-based stereo image quality objective evaluation method |
CN104811691A (en) | Stereoscopic video quality objective evaluation method based on wavelet transformation |
CN103400378A (en) | Method for objectively evaluating quality of three-dimensional image based on visual characteristics of human eyes |
CN105654465A (en) | Stereo image quality evaluation method through parallax compensation and inter-viewpoint filtering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 2014-11-05 | Termination date: 2021-05-11 |