
CN102708568B - Stereoscopic image objective quality evaluation method on basis of structural distortion - Google Patents

Stereoscopic image objective quality evaluation method on basis of structural distortion

Info

Publication number
CN102708568B
CN102708568B CN201210145034.0A CN201210145034A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210145034.0A
Other languages
Chinese (zh)
Other versions
CN102708568A (en)
Inventor
蒋刚毅
毛香英
王晓东
郁梅
周俊明
彭宗举
邵枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201210145034.0A priority Critical patent/CN102708568B/en
Publication of CN102708568A publication Critical patent/CN102708568A/en
Application granted granted Critical
Publication of CN102708568B publication Critical patent/CN102708568B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an objective quality evaluation method for stereoscopic images based on structural distortion. First, the left and right viewpoint images of the undistorted stereoscopic image and of the distorted stereoscopic image are partitioned into regions, yielding regions to which the human eye is sensitive and the corresponding non-sensitive regions; evaluation indices for the sensitive and non-sensitive regions are then derived from two aspects, structural amplitude distortion and structural direction distortion. Next, quality evaluation values for the left and right viewpoint images are obtained. Then, the singular value differences and the mean deviation rate of the residual images obtained after stripping the singular values are used to measure the distortion of the depth perception of the stereoscopic image, yielding an evaluation value of the stereoscopic perception quality. Finally, the left/right viewpoint image quality and the stereoscopic perception quality are combined to obtain the final quality evaluation result of the stereoscopic image. Because the method avoids simulating the individual components of the human visual system and instead makes full use of the structural information of the stereoscopic image, it effectively improves the consistency between the objective evaluation results and subjective perception.

Description

A Stereoscopic Image Objective Quality Evaluation Method Based on Structural Distortion

Technical Field

The invention relates to image quality evaluation technology, and in particular to an objective quality evaluation method for stereoscopic images based on structural distortion.

Background Art

Stereoscopic image quality evaluation plays a very important role in stereoscopic image/video systems: it can not only judge the merits of the processing algorithms in such systems, but also guide the optimization and design of those algorithms so as to improve the efficiency of stereoscopic image/video processing systems. Stereoscopic image quality evaluation methods fall into two categories: subjective quality evaluation and objective quality evaluation. Subjective quality evaluation takes a weighted average of the quality scores that multiple observers assign to the stereoscopic image under evaluation; its results agree with the characteristics of the human visual system, but it is limited by inconvenient computation, slow speed and high cost, which make it difficult to embed in systems, so it cannot be widely adopted in practical applications. Objective quality evaluation, by contrast, is simple to operate, low in cost, easy to implement and suitable for real-time algorithm optimization, and has therefore become the focus of research on stereoscopic image quality evaluation.

At present, mainstream objective quality evaluation models for stereoscopic images comprise two parts: evaluation of the left/right viewpoint image quality and evaluation of the depth perception quality. However, because human understanding of the human visual system is limited, it is difficult to accurately simulate its individual components, so the consistency between these models and subjective perception is not very good.

Summary of the Invention

The technical problem to be solved by the present invention is to provide an objective quality evaluation method for stereoscopic images based on structural distortion, which can effectively improve the consistency between the objective quality evaluation results of stereoscopic images and subjective perception.

The technical solution adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereoscopic images based on structural distortion, characterized in that it comprises the following steps:

① Let $S_{org}$ be the original undistorted stereoscopic image and $S_{dis}$ the distorted stereoscopic image to be evaluated. Denote the left-viewpoint grayscale image of $S_{org}$ as $L_{org}$ and its right-viewpoint grayscale image as $R_{org}$; denote the left-viewpoint grayscale image of $S_{dis}$ as $L_{dis}$ and its right-viewpoint grayscale image as $R_{dis}$;

② Partition each of the four images $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$ into regions, obtaining the sensitive-region matrix map corresponding to each image. The coefficient matrix of the sensitive-region matrix map shared by $L_{org}$ and $L_{dis}$ is denoted $A_L$, and its coefficient at coordinate $(i,j)$ is denoted $A_L(i,j)$; the coefficient matrix of the sensitive-region matrix map shared by $R_{org}$ and $R_{dis}$ is denoted $A_R$, and its coefficient at coordinate $(i,j)$ is denoted $A_R(i,j)$, where here $0 \le i \le (W-8)$, $0 \le j \le (H-8)$, $W$ is the width of $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$, and $H$ is their height;

③ Divide each of $L_{org}$ and $L_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, then compute the structural amplitude distortion map of every pair of overlapping blocks at the same coordinate position in $L_{org}$ and $L_{dis}$. The coefficient matrix of this structural amplitude distortion map is denoted $B_L$, and its coefficient at coordinate $(i,j)$ is denoted $B_L(i,j)$:
$$B_L(i,j)=\frac{2\times\sigma_{org,dis,L}(i,j)+C_1}{(\sigma_{org,L}(i,j))^2+(\sigma_{dis,L}(i,j))^2+C_1},$$
where $B_L(i,j)$ is also the structural amplitude distortion value between the $8\times 8$ block of $L_{org}$ whose upper-left corner is at $(i,j)$ and the $8\times 8$ block of $L_{dis}$ whose upper-left corner is at $(i,j)$,
$$\sigma_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(L_{org}(i+x,j+y)-U_{org,L}(i,j)\right)^2,\quad U_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{org}(i+x,j+y),$$
$$\sigma_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(L_{dis}(i+x,j+y)-U_{dis,L}(i,j)\right)^2,\quad U_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{dis}(i+x,j+y),$$
$$\sigma_{org,dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(\left(L_{org}(i+x,j+y)-U_{org,L}(i,j)\right)\times\left(L_{dis}(i+x,j+y)-U_{dis,L}(i,j)\right)\right),$$
$L_{org}(i+x,j+y)$ and $L_{dis}(i+x,j+y)$ are the pixel values of the pixels at coordinate $(i+x,j+y)$ in $L_{org}$ and $L_{dis}$ respectively, $C_1$ is a constant, and here $0 \le i \le (W-8)$, $0 \le j \le (H-8)$;

Divide each of $R_{org}$ and $R_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, then compute the structural amplitude distortion map of every pair of overlapping blocks at the same coordinate position in $R_{org}$ and $R_{dis}$. The coefficient matrix of this map is denoted $B_R$, and its coefficient at coordinate $(i,j)$ is denoted $B_R(i,j)$:
$$B_R(i,j)=\frac{2\times\sigma_{org,dis,R}(i,j)+C_1}{(\sigma_{org,R}(i,j))^2+(\sigma_{dis,R}(i,j))^2+C_1},$$
where $B_R(i,j)$ is also the structural amplitude distortion value between the $8\times 8$ block of $R_{org}$ whose upper-left corner is at $(i,j)$ and the $8\times 8$ block of $R_{dis}$ whose upper-left corner is at $(i,j)$,
$$\sigma_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(R_{org}(i+x,j+y)-U_{org,R}(i,j)\right)^2,\quad U_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{org}(i+x,j+y),$$
$$\sigma_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(R_{dis}(i+x,j+y)-U_{dis,R}(i,j)\right)^2,\quad U_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{dis}(i+x,j+y),$$
$$\sigma_{org,dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\left(\left(R_{org}(i+x,j+y)-U_{org,R}(i,j)\right)\times\left(R_{dis}(i+x,j+y)-U_{dis,R}(i,j)\right)\right),$$
$R_{org}(i+x,j+y)$ and $R_{dis}(i+x,j+y)$ are the pixel values of the pixels at coordinate $(i+x,j+y)$ in $R_{org}$ and $R_{dis}$ respectively, $C_1$ is a constant, and here $0 \le i \le (W-8)$, $0 \le j \le (H-8)$;
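For illustration only, a minimal NumPy sketch of the block-wise structural amplitude distortion of step ③ (shown for one view; the right view is handled identically). The function and variable names are ours, and the default $C_1=0.01$ is the value suggested later in the embodiment.

```python
import numpy as np

def structural_amplitude_distortion(img_org, img_dis, C1=0.01, block=8):
    """B(i, j) over all 8x8 overlapping block pairs at the same position:
    B = (2*cov + C1) / (var_org + var_dis + C1)."""
    H, W = img_org.shape
    B = np.zeros((H - block + 1, W - block + 1))
    for i in range(W - block + 1):          # i: horizontal block position, as in the text
        for j in range(H - block + 1):      # j: vertical block position
            b_org = img_org[j:j + block, i:i + block].astype(np.float64)
            b_dis = img_dis[j:j + block, i:i + block].astype(np.float64)
            u_org, u_dis = b_org.mean(), b_dis.mean()
            var_org = ((b_org - u_org) ** 2).mean()
            var_dis = ((b_dis - u_dis) ** 2).mean()
            cov = ((b_org - u_org) * (b_dis - u_dis)).mean()
            B[j, i] = (2.0 * cov + C1) / (var_org + var_dis + C1)
    return B
```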

④ Apply the horizontal and vertical Sobel operators to $L_{org}$ and $L_{dis}$, obtaining the horizontal-direction and vertical-direction gradient matrix maps of each image. The coefficient matrix of the horizontal-direction gradient matrix map obtained by applying the horizontal Sobel operator to $L_{org}$ is denoted $I_{h,org,L}$, with its coefficient at coordinate $(i,j)$ denoted $I_{h,org,L}(i,j)$:
$$I_{h,org,L}(i,j)=L_{org}(i+2,j)+2L_{org}(i+2,j+1)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i,j+1)-L_{org}(i,j+2);$$
the coefficient matrix of the vertical-direction gradient matrix map of $L_{org}$ is denoted $I_{v,org,L}$, with
$$I_{v,org,L}(i,j)=L_{org}(i,j+2)+2L_{org}(i+1,j+2)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i+1,j)-L_{org}(i+2,j);$$
the coefficient matrix of the horizontal-direction gradient matrix map of $L_{dis}$ is denoted $I_{h,dis,L}$, with
$$I_{h,dis,L}(i,j)=L_{dis}(i+2,j)+2L_{dis}(i+2,j+1)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i,j+1)-L_{dis}(i,j+2);$$
and the coefficient matrix of the vertical-direction gradient matrix map of $L_{dis}$ is denoted $I_{v,dis,L}$, with
$$I_{v,dis,L}(i,j)=L_{dis}(i,j+2)+2L_{dis}(i+1,j+2)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i+1,j)-L_{dis}(i+2,j),$$
where $L_{org}(\cdot,\cdot)$ and $L_{dis}(\cdot,\cdot)$ denote the pixel values of $L_{org}$ and $L_{dis}$ at the indicated coordinates;

Apply the horizontal and vertical Sobel operators to $R_{org}$ and $R_{dis}$, obtaining the horizontal-direction and vertical-direction gradient matrix maps of each image. The coefficient matrix of the horizontal-direction gradient matrix map of $R_{org}$ is denoted $I_{h,org,R}$, with its coefficient at coordinate $(i,j)$ denoted $I_{h,org,R}(i,j)$:
$$I_{h,org,R}(i,j)=R_{org}(i+2,j)+2R_{org}(i+2,j+1)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i,j+1)-R_{org}(i,j+2);$$
the coefficient matrix of the vertical-direction gradient matrix map of $R_{org}$ is denoted $I_{v,org,R}$, with
$$I_{v,org,R}(i,j)=R_{org}(i,j+2)+2R_{org}(i+1,j+2)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i+1,j)-R_{org}(i+2,j);$$
the coefficient matrix of the horizontal-direction gradient matrix map of $R_{dis}$ is denoted $I_{h,dis,R}$, with
$$I_{h,dis,R}(i,j)=R_{dis}(i+2,j)+2R_{dis}(i+2,j+1)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i,j+1)-R_{dis}(i,j+2);$$
and the coefficient matrix of the vertical-direction gradient matrix map of $R_{dis}$ is denoted $I_{v,dis,R}$, with
$$I_{v,dis,R}(i,j)=R_{dis}(i,j+2)+2R_{dis}(i+1,j+2)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i+1,j)-R_{dis}(i+2,j),$$
where $R_{org}(\cdot,\cdot)$ and $R_{dis}(\cdot,\cdot)$ denote the pixel values of $R_{org}$ and $R_{dis}$ at the indicated coordinates;
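A direct NumPy transcription of the gradient sums of step ④, for illustration (names are ours; the maps are computed only where the full 3×3 neighbourhood fits inside the image):

```python
import numpy as np

def sobel_gradient_maps(img):
    """Horizontal and vertical gradient maps I_h(i, j) and I_v(i, j) of step (4),
    with (i, j) the upper-left corner of each 3x3 neighbourhood (i horizontal,
    j vertical, matching the text)."""
    I = img.astype(np.float64)
    H, W = I.shape
    Ih = np.zeros((H - 2, W - 2))
    Iv = np.zeros((H - 2, W - 2))
    for i in range(W - 2):
        for j in range(H - 2):
            Ih[j, i] = (I[j, i + 2] + 2 * I[j + 1, i + 2] + I[j + 2, i + 2]
                        - I[j, i] - 2 * I[j + 1, i] - I[j + 2, i])
            Iv[j, i] = (I[j + 2, i] + 2 * I[j + 2, i + 1] + I[j + 2, i + 2]
                        - I[j, i] - 2 * I[j, i + 1] - I[j, i + 2])
    return Ih, Iv
```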

⑤ Compute the structural direction distortion map of every pair of overlapping blocks at the same coordinate position in $L_{org}$ and $L_{dis}$. The coefficient matrix of this structural direction distortion map is denoted $E_L$, and its coefficient at coordinate $(i,j)$ is denoted $E_L(i,j)$:
$$E_L(i,j)=\frac{I_{h,org,L}(i,j)\times I_{h,dis,L}(i,j)+I_{v,org,L}(i,j)\times I_{v,dis,L}(i,j)+C_2}{\sqrt{(I_{h,org,L}(i,j))^2+(I_{v,org,L}(i,j))^2}\times\sqrt{(I_{h,dis,L}(i,j))^2+(I_{v,dis,L}(i,j))^2}+C_2},$$
where $C_2$ is a constant;

Compute the structural direction distortion map of every pair of overlapping blocks at the same coordinate position in $R_{org}$ and $R_{dis}$. The coefficient matrix of this structural direction distortion map is denoted $E_R$, and its coefficient at coordinate $(i,j)$ is denoted $E_R(i,j)$:
$$E_R(i,j)=\frac{I_{h,org,R}(i,j)\times I_{h,dis,R}(i,j)+I_{v,org,R}(i,j)\times I_{v,dis,R}(i,j)+C_2}{\sqrt{(I_{h,org,R}(i,j))^2+(I_{v,org,R}(i,j))^2}\times\sqrt{(I_{h,dis,R}(i,j))^2+(I_{v,dis,R}(i,j))^2}+C_2};$$
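A sketch of the structural direction distortion of step ⑤, assuming the denominator uses the gradient-vector magnitudes (the radicals are not legible in the formula above) and an assumed default $C_2=0.01$:

```python
import numpy as np

def structural_direction_distortion(Ih_org, Iv_org, Ih_dis, Iv_dis, C2=0.01):
    """E(i, j) from the horizontal/vertical gradient maps of the original and
    distorted views (C2 is only stated to be "a constant" in the text)."""
    num = Ih_org * Ih_dis + Iv_org * Iv_dis + C2
    den = np.sqrt(Ih_org ** 2 + Iv_org ** 2) * np.sqrt(Ih_dis ** 2 + Iv_dis ** 2) + C2
    return num / den
```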

⑥ Compute the structural distortion evaluation value of $L_{org}$ and $L_{dis}$, denoted $Q_L$: $Q_L=\omega_1\times Q_{m,L}+\omega_2\times Q_{nm,L}$, where
$$Q_{m,L}=\frac{1}{N_{L,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(0.5\times\left(B_L(i,j)+E_L(i,j)\right)\times A_L(i,j)\right),\quad N_{L,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_L(i,j),$$
$$Q_{nm,L}=\frac{1}{N_{L,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(0.5\times\left(B_L(i,j)+E_L(i,j)\right)\times\left(1-A_L(i,j)\right)\right),\quad N_{L,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(1-A_L(i,j)\right),$$
$\omega_1$ is the weight of the sensitive regions of $L_{org}$ and $L_{dis}$, and $\omega_2$ is the weight of the non-sensitive regions of $L_{org}$ and $L_{dis}$;

Compute the structural distortion evaluation value of $R_{org}$ and $R_{dis}$, denoted $Q_R$: $Q_R=\omega'_1\times Q_{m,R}+\omega'_2\times Q_{nm,R}$, where
$$Q_{m,R}=\frac{1}{N_{R,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(0.5\times\left(B_R(i,j)+E_R(i,j)\right)\times A_R(i,j)\right),\quad N_{R,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_R(i,j),$$
$$Q_{nm,R}=\frac{1}{N_{R,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(0.5\times\left(B_R(i,j)+E_R(i,j)\right)\times\left(1-A_R(i,j)\right)\right),\quad N_{R,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\left(1-A_R(i,j)\right),$$
$\omega'_1$ is the weight of the sensitive regions of $R_{org}$ and $R_{dis}$, and $\omega'_2$ is the weight of the non-sensitive regions of $R_{org}$ and $R_{dis}$;
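For illustration, a sketch of the region-weighted pooling of step ⑥ (names are ours; `w_sensitive` and `w_nonsensitive` play the roles of $\omega_1$ and $\omega_2$, or $\omega'_1$ and $\omega'_2$ for the right view; the maps `B`, `E` and `A` are assumed to have been cropped to the common index range $0 \le i \le W-8$, $0 \le j \le H-8$ used in the text):

```python
import numpy as np

def view_quality(B, E, A, w_sensitive, w_nonsensitive):
    """Pool the amplitude map B and direction map E over the sensitive (A == 1)
    and non-sensitive (A == 0) regions and combine the two averages linearly."""
    S = 0.5 * (B + E)                            # per-position structural distortion score
    Q_m = (S * A).sum() / A.sum()                # sensitive-region average
    Q_nm = (S * (1 - A)).sum() / (1 - A).sum()   # non-sensitive-region average
    return w_sensitive * Q_m + w_nonsensitive * Q_nm
```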

⑦ From $Q_L$ and $Q_R$, compute the spatial frequency similarity measure of the distorted stereoscopic image $S_{dis}$ to be evaluated relative to the original undistorted stereoscopic image $S_{org}$, denoted $Q_F$: $Q_F=\beta_1\times Q_L+(1-\beta_1)\times Q_R$, where $\beta_1$ is the weight of $Q_L$;

⑧ Compute the absolute difference image of $L_{org}$ and $R_{org}$, expressed in matrix form as the element-wise $|L_{org}-R_{org}|$, and the absolute difference image of $L_{dis}$ and $R_{dis}$, expressed in matrix form as the element-wise $|L_{dis}-R_{dis}|$, where "$|\cdot|$" denotes the absolute value;
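A one-line NumPy sketch of this step (function name is ours):

```python
import numpy as np

def absolute_difference_image(left, right):
    """Element-wise absolute difference of a left/right grayscale pair, |L - R|."""
    return np.abs(left.astype(np.float64) - right.astype(np.float64))
```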

⑨ Divide each of the two absolute difference images into $W_{LR}\times H_{LR}/64$ non-overlapping blocks of size $8\times 8$, then apply singular value decomposition to every block of the two images, obtaining for each image a singular value map composed of the singular value matrices of its blocks. The coefficient matrix of the singular value map obtained from the original difference image is denoted $G_{org}$, and the singular value at coordinate $(p,q)$ in the singular value matrix of the $n$-th block of $G_{org}$ is denoted $G_{org}^{n}(p,q)$; the coefficient matrix of the singular value map obtained from the distorted difference image is denoted $G_{dis}$, and the singular value at coordinate $(p,q)$ in the singular value matrix of the $n$-th block of $G_{dis}$ is denoted $G_{dis}^{n}(p,q)$, where $W_{LR}$ is the width of the difference images, $H_{LR}$ is their height, $0 \le p \le 7$ and $0 \le q \le 7$;

⑩ Compute the singular value deviation evaluation value between the singular value map of the original difference image and that of the distorted difference image, denoted $K$:
$$K=\frac{64}{W_{LR}\times H_{LR}}\times\sum_{n=0}^{W_{LR}\times H_{LR}/64-1}\frac{\sum_{p=0}^{7}\left(G_{org}^{n}(p,p)\times\left|G_{org}^{n}(p,p)-G_{dis}^{n}(p,p)\right|\right)}{\sum_{p=0}^{7}G_{org}^{n}(p,p)},$$
where $G_{org}^{n}(p,p)$ is the singular value at coordinate $(p,p)$ in the singular value matrix of the $n$-th block of $G_{org}$, and $G_{dis}^{n}(p,p)$ is the singular value at coordinate $(p,p)$ in the singular value matrix of the $n$-th block of $G_{dis}$;
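An illustrative NumPy sketch of the block-wise singular value deviation $K$ of steps ⑨ and ⑩; `D_org` and `D_dis` stand for the original and distorted absolute difference images, the names are ours, and a small epsilon is added only as a numerical safeguard that is not part of the formula above.

```python
import numpy as np

def singular_value_deviation(D_org, D_dis, block=8):
    """Block-wise SVD of the two absolute difference images and the
    singular-value-weighted deviation K (steps 9-10)."""
    H, W = D_org.shape
    total = 0.0
    n_blocks = (H // block) * (W // block)
    for r in range(0, H - H % block, block):
        for c in range(0, W - W % block, block):
            s_org = np.linalg.svd(D_org[r:r + block, c:c + block].astype(np.float64),
                                  compute_uv=False)
            s_dis = np.linalg.svd(D_dis[r:r + block, c:c + block].astype(np.float64),
                                  compute_uv=False)
            # per-block term: sum_p s_org[p]*|s_org[p]-s_dis[p]| / sum_p s_org[p]
            total += (s_org * np.abs(s_org - s_dis)).sum() / (s_org.sum() + 1e-12)
    return total / n_blocks
```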

分别实施奇异值分解,分别得到各自对应的2个正交矩阵和1个奇异值矩阵,将实施奇异值分解后得到的2个正交矩阵分别记为χorg和Vorg,将实施奇异值分解后得到的奇异值矩阵记为Oorg实施奇异值分解后得到的2个正交矩阵分别记为χdis和Vdis,将实施奇异值分解后得到的奇异值矩阵记为Odis right and Singular value decomposition is carried out respectively, and we get and Corresponding to two orthogonal matrices and one singular value matrix, the The two orthogonal matrices obtained after implementing singular value decomposition are denoted as χ org and V org respectively, and the The singular value matrix obtained after implementing singular value decomposition is denoted as O org , Will The two orthogonal matrices obtained after implementing singular value decomposition are denoted as χ dis and V dis respectively, and the The singular value matrix obtained after implementing singular value decomposition is denoted as O dis ,

Compute the residual matrix of each difference image after its singular values are stripped. The residual matrix of the original difference image is denoted $X_{org}$, $X_{org}=\chi_{org}\times\Lambda\times V_{org}$; the residual matrix of the distorted difference image is denoted $X_{dis}$, $X_{dis}=\chi_{dis}\times\Lambda\times V_{dis}$, where $\Lambda$ is the identity matrix and its size is the same as that of $O_{org}$ and $O_{dis}$;
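A sketch of the residual computation, under the assumption that NumPy's reduced SVD convention ($D = U\,\mathrm{diag}(s)\,V^T$) corresponds to the $\chi$, $O$, $V$ factorization above; function and variable names are ours.

```python
import numpy as np

def residual_after_stripping_singular_values(D):
    """Reconstruct a difference image with its singular value matrix replaced by
    the identity matrix Lambda, i.e. X = U @ I @ Vt, as described above."""
    U, s, Vt = np.linalg.svd(D.astype(np.float64), full_matrices=False)
    Lam = np.eye(len(s))   # identity matrix of the same size as the singular value matrix
    return U @ Lam @ Vt
```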

Compute the mean deviation rate of $X_{org}$ and $X_{dis}$, where $x$ denotes the abscissa and $y$ the ordinate of the pixels in $X_{org}$ and $X_{dis}$;

Compute the stereoscopic perception evaluation measure of the distorted stereoscopic image $S_{dis}$ to be evaluated relative to the original undistorted stereoscopic image $S_{org}$, denoted $Q_S$, from $K$ and the mean deviation rate, where $\tau$ is a constant used to adjust the relative importance of $K$ and the mean deviation rate in $Q_S$;

From $Q_F$ and $Q_S$, compute the image quality evaluation score of the distorted stereoscopic image $S_{dis}$ to be evaluated, denoted $Q$: $Q=Q_F\times(Q_S)^{\rho}$, where $\rho$ is a weight coefficient value.
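A sketch of the final combination, purely for illustration (the parameter names mirror the symbols above; no specific values of $\beta_1$ and $\rho$ are implied here):

```python
def overall_quality(Q_L, Q_R, Q_S, beta1, rho):
    """Combine the left/right structural distortion scores and the stereoscopic
    perception score: Q_F = beta1*Q_L + (1 - beta1)*Q_R, then Q = Q_F * Q_S**rho."""
    Q_F = beta1 * Q_L + (1.0 - beta1) * Q_R
    return Q_F * Q_S ** rho
```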

The coefficient matrix $A_L$ of the sensitive-region matrix map shared by $L_{org}$ and $L_{dis}$ in step ② is obtained as follows:

②-a1. Apply the horizontal and vertical Sobel operators to $L_{org}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,l1}$ and $Z_{v,l1}$ respectively, then compute the gradient magnitude map of $L_{org}$, denoted $Z_{l1}$, where $Z_{l1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l1}$, $Z_{h,l1}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,l1}$, $Z_{v,l1}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,l1}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{l1}$ and $H'$ is its height;

②-a2. Apply the horizontal and vertical Sobel operators to $L_{dis}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,l2}$ and $Z_{v,l2}$ respectively, then compute the gradient magnitude map of $L_{dis}$, denoted $Z_{l2}$, where $Z_{l2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l2}$, $Z_{h,l2}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,l2}$, $Z_{v,l2}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,l2}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{l2}$ and $H'$ is its height;

②-a3. Compute the threshold $T$ required for region division:
$$T=\alpha\times\frac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l2}(x,y)\right),$$
where $\alpha$ is a constant, $Z_{l1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l1}$, and $Z_{l2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l2}$;

②-a4. Denote the gradient magnitude of the pixel at coordinate $(i,j)$ in $Z_{l1}$ as $Z_{l1}(i,j)$ and that of the pixel at coordinate $(i,j)$ in $Z_{l2}$ as $Z_{l2}(i,j)$. Determine whether $Z_{l1}(i,j)>T$ or $Z_{l2}(i,j)>T$ holds; if so, the pixel at coordinate $(i,j)$ in $L_{org}$ and $L_{dis}$ belongs to the sensitive region and $A_L(i,j)=1$; otherwise, it belongs to the non-sensitive region and $A_L(i,j)=0$, where $0 \le i \le (W-8)$, $0 \le j \le (H-8)$;
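For illustration, a sketch of steps ②-a1 to ②-a4 using SciPy's Sobel filter; the square-root gradient magnitude and the value of $\alpha$ passed in are assumptions of this sketch, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import sobel

def sensitive_region_map(img_org, img_dis, alpha):
    """Gradient magnitudes of the original and distorted views, a joint threshold T,
    and the binary sensitive-region map A (1 = sensitive, 0 = non-sensitive)."""
    def grad_mag(img):
        img = img.astype(np.float64)
        gh = sobel(img, axis=1)            # horizontal-direction gradient
        gv = sobel(img, axis=0)            # vertical-direction gradient
        return np.sqrt(gh ** 2 + gv ** 2)  # magnitude (assumed form; see lead-in)

    Z1, Z2 = grad_mag(img_org), grad_mag(img_dis)
    T = alpha * (Z1.sum() + Z2.sum()) / Z1.size   # threshold of step (2)-a3
    return ((Z1 > T) | (Z2 > T)).astype(np.float64)
```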

The coefficient matrix $A_R$ of the sensitive-region matrix map shared by $R_{org}$ and $R_{dis}$ in step ② is obtained as follows:

②-b1. Apply the horizontal and vertical Sobel operators to $R_{org}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,r1}$ and $Z_{v,r1}$ respectively, then compute the gradient magnitude map of $R_{org}$, denoted $Z_{r1}$, where $Z_{r1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r1}$, $Z_{h,r1}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,r1}$, $Z_{v,r1}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,r1}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{r1}$ and $H'$ is its height;

②-b2. Apply the horizontal and vertical Sobel operators to $R_{dis}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,r2}$ and $Z_{v,r2}$ respectively, then compute the gradient magnitude map of $R_{dis}$, denoted $Z_{r2}$, where $Z_{r2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r2}$, $Z_{h,r2}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,r2}$, $Z_{v,r2}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,r2}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{r2}$ and $H'$ is its height;

②-b3. Compute the threshold $T'$ required for region division:
$$T'=\alpha\times\frac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r2}(x,y)\right),$$
where $\alpha$ is a constant, $Z_{r1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r1}$, and $Z_{r2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r2}$;

②-b4. Denote the gradient magnitude of the pixel at coordinate $(i,j)$ in $Z_{r1}$ as $Z_{r1}(i,j)$ and that of the pixel at coordinate $(i,j)$ in $Z_{r2}$ as $Z_{r2}(i,j)$. Determine whether $Z_{r1}(i,j)>T'$ or $Z_{r2}(i,j)>T'$ holds; if so, the pixel at coordinate $(i,j)$ in $R_{org}$ and $R_{dis}$ belongs to the sensitive region and $A_R(i,j)=1$; otherwise, it belongs to the non-sensitive region and $A_R(i,j)=0$, where $0 \le i \le (W-8)$, $0 \le j \le (H-8)$.

The value $\beta_1$ in step ⑦ is obtained as follows:

⑦-1. Use $n$ undistorted stereoscopic images to build a distorted stereoscopic image set covering different distortion types at different distortion levels; the set contains multiple distorted stereoscopic images, where $n \ge 1$;

⑦-2. Use a subjective quality evaluation method to obtain the mean subjective score difference of every distorted stereoscopic image in the set, denoted DMOS, with DMOS = 100 − MOS, where MOS is the mean opinion score and DMOS ∈ [0,100];

⑦-3. Following the operations of steps ① to ⑥, compute, for the left-viewpoint image of every distorted stereoscopic image in the set, the sensitive-region evaluation value $Q_{m,L}$ and the non-sensitive-region evaluation value $Q_{nm,L}$ relative to the left-viewpoint image of the corresponding undistorted stereoscopic image;

⑦-4. Use a mathematical fitting method to fit the mean subjective score differences DMOS of the distorted stereoscopic images in the set against the corresponding $Q_{m,L}$ and $Q_{nm,L}$, thereby obtaining the value of $\beta_1$.
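The patent only states that "a mathematical fitting method" is used in step ⑦-4; the sketch below is one possible illustration (an ordinary least-squares regression of DMOS on the two per-region scores, with the normalized coefficient taken as the weight) and should not be read as the patented procedure.

```python
import numpy as np

def fit_region_weight(dmos, Q_m, Q_nm):
    """Illustrative fitting step: regress DMOS on the sensitive / non-sensitive
    region scores and return the normalized coefficient of the sensitive-region
    term. Both the regression and the normalization are assumptions."""
    X = np.column_stack([Q_m, Q_nm, np.ones_like(Q_m)])  # design matrix with intercept
    coeffs, *_ = np.linalg.lstsq(X, dmos, rcond=None)
    a, b = coeffs[0], coeffs[1]
    return a / (a + b)
```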

Compared with the prior art, the present invention has the following advantages. First, the left-viewpoint and right-viewpoint images of the undistorted and distorted stereoscopic images are partitioned into regions to obtain the regions to which the human eye is sensitive and the corresponding non-sensitive regions, and evaluation indices for the sensitive and non-sensitive regions are derived from structural amplitude distortion and structural direction distortion respectively. Second, linear weighting is used to obtain the quality evaluation values of the left-viewpoint and right-viewpoint images, and from them the overall left/right viewpoint image quality. Third, since singular values characterize the structural information of stereoscopic images well, the singular value differences and the mean deviation rate of the residual images obtained after stripping the singular values are used to measure the distortion of depth perception, yielding the evaluation value of the stereoscopic perception quality. Finally, the left/right viewpoint image quality and the stereoscopic perception quality are combined in a non-linear manner to obtain the final quality evaluation result of the stereoscopic image. Because the method avoids simulating the individual components of the human visual system and makes full use of the structural information of the stereoscopic image, it effectively improves the consistency between objective evaluation results and subjective perception.

Brief Description of the Drawings

Fig. 1 is the overall implementation block diagram of the method of the present invention;

Fig. 2a is the Akko&Kayo (640×480) stereoscopic image;
Fig. 2b is the Alt Moabit (1024×768) stereoscopic image;
Fig. 2c is the Balloons (1024×768) stereoscopic image;
Fig. 2d is the Door Flowers (1024×768) stereoscopic image;
Fig. 2e is the Kendo (1024×768) stereoscopic image;
Fig. 2f is the Leaving Laptop (1024×768) stereoscopic image;
Fig. 2g is the Lovebird1 (1024×768) stereoscopic image;
Fig. 2h is the Newspaper (1024×768) stereoscopic image;
Fig. 2i is the Xmas (640×480) stereoscopic image;
Fig. 2j is the Puppy (720×480) stereoscopic image;
Fig. 2k is the Soccer2 (720×480) stereoscopic image;
Fig. 2l is the Horse (480×270) stereoscopic image;

Fig. 3 is the block diagram of left-viewpoint image quality evaluation in the method of the present invention;

Fig. 4a shows the variation of the CC performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$ and $\omega_1$;
Fig. 4b shows the variation of the SROCC performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$ and $\omega_1$;
Fig. 4c shows the variation of the RMSE performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$ and $\omega_1$;

Fig. 5a shows, for $\omega_1=1$, the variation of the CC performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$;
Fig. 5b shows, for $\omega_1=1$, the variation of the SROCC performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$;
Fig. 5c shows, for $\omega_1=1$, the variation of the RMSE performance between left-viewpoint image quality and subjective perceptual quality under different $\alpha$;

Fig. 6a shows the variation of the CC performance between left/right viewpoint image quality and subjective perceptual quality under different $\beta_1$;
Fig. 6b shows the variation of the SROCC performance between left/right viewpoint image quality and subjective perceptual quality under different $\beta_1$;
Fig. 6c shows the variation of the RMSE performance between left/right viewpoint image quality and subjective perceptual quality under different $\beta_1$;

Fig. 7a shows the variation of the CC performance between stereoscopic depth perception quality and subjective perceptual quality under different $\tau$;
Fig. 7b shows the variation of the SROCC performance between stereoscopic depth perception quality and subjective perceptual quality under different $\tau$;
Fig. 7c shows the variation of the RMSE performance between stereoscopic depth perception quality and subjective perceptual quality under different $\tau$;

Fig. 8a shows the variation of the CC performance between stereoscopic image quality and subjective perceptual quality under different $\rho$;
Fig. 8b shows the variation of the SROCC performance between stereoscopic image quality and subjective perceptual quality under different $\rho$;
Fig. 8c shows the variation of the RMSE performance between stereoscopic image quality and subjective perceptual quality under different $\rho$.

Detailed Description of the Embodiments

The present invention is described in further detail below in conjunction with the accompanying drawings and an embodiment.

The objective quality evaluation method for stereoscopic images based on structural distortion proposed by the present invention evaluates, from the perspective of structural distortion, the quality of the left and right viewpoint images and the stereoscopic perception quality of the stereoscopic image, and obtains the final quality evaluation value of the stereoscopic image by non-linear weighting. Fig. 1 shows the overall implementation block diagram of the method, which comprises the following steps:

① Let $S_{org}$ be the original undistorted stereoscopic image and $S_{dis}$ the distorted stereoscopic image to be evaluated. Denote the left-viewpoint grayscale image of $S_{org}$ as $L_{org}$ and its right-viewpoint grayscale image as $R_{org}$; denote the left-viewpoint grayscale image of $S_{dis}$ as $L_{dis}$ and its right-viewpoint grayscale image as $R_{dis}$.

② Partition each of the four images $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$ into regions, obtaining the sensitive-region matrix map corresponding to each image. The coefficient matrix of the sensitive-region matrix map shared by $L_{org}$ and $L_{dis}$ is denoted $A_L$, with its coefficient at coordinate $(i,j)$ denoted $A_L(i,j)$; the coefficient matrix of the sensitive-region matrix map shared by $R_{org}$ and $R_{dis}$ is denoted $A_R$, with its coefficient at coordinate $(i,j)$ denoted $A_R(i,j)$, where here $0 \le i \le (W-8)$, $0 \le j \le (H-8)$, $W$ is the width of $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$, and $H$ is their height.

In this embodiment, the coefficient matrix $A_L$ of the sensitive-region matrix map corresponding to $L_{org}$ and $L_{dis}$ in step ② is obtained as follows:

②-a1. Apply the horizontal and vertical Sobel operators to $L_{org}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,l1}$ and $Z_{v,l1}$ respectively, then compute the gradient magnitude map of $L_{org}$, denoted $Z_{l1}$, where $Z_{l1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l1}$, $Z_{h,l1}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,l1}$, $Z_{v,l1}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,l1}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{l1}$ and $H'$ is its height.

②-a2. Apply the horizontal and vertical Sobel operators to $L_{dis}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,l2}$ and $Z_{v,l2}$ respectively, then compute the gradient magnitude map of $L_{dis}$, denoted $Z_{l2}$, where $Z_{l2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l2}$, $Z_{h,l2}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,l2}$, $Z_{v,l2}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,l2}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{l2}$ and $H'$ is its height.

②-a3. Compute the threshold $T$ required for region division:
$$T=\alpha\times\frac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l2}(x,y)\right),$$
where $W'$ is the width of $Z_{l1}$ and $Z_{l2}$, $H'$ is their height, $\alpha$ is a constant, $Z_{l1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l1}$, and $Z_{l2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{l2}$.

②-a4. Denote the gradient magnitude of the pixel at coordinate $(i,j)$ in $Z_{l1}$ as $Z_{l1}(i,j)$ and that of the pixel at coordinate $(i,j)$ in $Z_{l2}$ as $Z_{l2}(i,j)$. Determine whether $Z_{l1}(i,j)>T$ or $Z_{l2}(i,j)>T$ holds; if so, the pixel at coordinate $(i,j)$ in $L_{org}$ and $L_{dis}$ belongs to the sensitive region and $A_L(i,j)=1$; otherwise, it belongs to the non-sensitive region and $A_L(i,j)=0$, where $0 \le i \le (W-8)$, $0 \le j \le (H-8)$.

In this embodiment, the coefficient matrix $A_R$ of the sensitive-region matrix map corresponding to $R_{org}$ and $R_{dis}$ in step ② is obtained as follows:

②-b1. Apply the horizontal and vertical Sobel operators to $R_{org}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,r1}$ and $Z_{v,r1}$ respectively, then compute the gradient magnitude map of $R_{org}$, denoted $Z_{r1}$, where $Z_{r1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r1}$, $Z_{h,r1}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,r1}$, $Z_{v,r1}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,r1}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{r1}$ and $H'$ is its height.

②-b2. Apply the horizontal and vertical Sobel operators to $R_{dis}$ to obtain its horizontal-direction and vertical-direction gradient images, denoted $Z_{h,r2}$ and $Z_{v,r2}$ respectively, then compute the gradient magnitude map of $R_{dis}$, denoted $Z_{r2}$, where $Z_{r2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r2}$, $Z_{h,r2}(x,y)$ is the horizontal-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{h,r2}$, $Z_{v,r2}(x,y)$ is the vertical-direction gradient value of the pixel at coordinate $(x,y)$ in $Z_{v,r2}$, $1 \le x \le W'$, $1 \le y \le H'$, $W'$ is the width of $Z_{r2}$ and $H'$ is its height.

②-b3. Compute the threshold $T'$ required for region division:
$$T'=\alpha\times\frac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r2}(x,y)\right),$$
where $W'$ is the width of $Z_{r1}$ and $Z_{r2}$, $H'$ is their height, $\alpha$ is a constant, $Z_{r1}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r1}$, and $Z_{r2}(x,y)$ is the gradient magnitude of the pixel at coordinate $(x,y)$ in $Z_{r2}$.

②-b4. Denote the gradient magnitude of the pixel at coordinate $(i,j)$ in $Z_{r1}$ as $Z_{r1}(i,j)$ and that of the pixel at coordinate $(i,j)$ in $Z_{r2}$ as $Z_{r2}(i,j)$. Determine whether $Z_{r1}(i,j)>T'$ or $Z_{r2}(i,j)>T'$ holds; if so, the pixel at coordinate $(i,j)$ in $R_{org}$ and $R_{dis}$ belongs to the sensitive region and $A_R(i,j)=1$; otherwise, it belongs to the non-sensitive region and $A_R(i,j)=0$, where $0 \le i \le (W-8)$, $0 \le j \le (H-8)$.

In this embodiment, the 12 undistorted stereoscopic images shown in Fig. 2a to Fig. 2l are used to build a distorted stereoscopic image set covering different distortion types at different distortion levels. The distortion types include JPEG compression, JPEG2000 compression, Gaussian white noise, Gaussian blur and H.264 coding distortion, and the left-viewpoint and right-viewpoint images of each stereoscopic image are distorted simultaneously and to the same degree. The distorted stereoscopic image set contains 312 distorted stereoscopic images in total: 60 JPEG-compressed, 60 JPEG2000-compressed, 60 with Gaussian white noise distortion, 60 with Gaussian blur distortion, and 72 with H.264 coding distortion. The above 312 stereoscopic images are subjected to the region division described above.

In this embodiment, the value of α determines how accurately the sensitive region is identified: if it is too large, sensitive pixels are misclassified as non-sensitive, and if it is too small, non-sensitive pixels are misclassified as sensitive. Its value is therefore determined together with the contribution of the left-viewpoint or right-viewpoint image quality to the overall stereoscopic image quality.

③ Divide each of $L_{org}$ and $L_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, and compute the structural amplitude distortion map of every pair of co-located blocks in the two images. Denote the coefficient matrix of this map as $B_L$ and its coefficient at coordinate $(i,j)$ as $B_L(i,j)$:
$B_L(i,j)=\dfrac{2\times\sigma_{org,dis,L}(i,j)+C_1}{(\sigma_{org,L}(i,j))^2+(\sigma_{dis,L}(i,j))^2+C_1}$,
where $B_L(i,j)$ is the structural amplitude distortion value between the $8\times 8$ block of $L_{org}$ whose top-left corner is at $(i,j)$ and the $8\times 8$ block of $L_{dis}$ whose top-left corner is at $(i,j)$,
$\sigma_{org,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(L_{org}(i+x,j+y)-U_{org,L}(i,j)\big)^2}$,
$U_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{org}(i+x,j+y)$,
$\sigma_{dis,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(L_{dis}(i+x,j+y)-U_{dis,L}(i,j)\big)^2}$,
$U_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{dis}(i+x,j+y)$,
$\sigma_{org,dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big((L_{org}(i+x,j+y)-U_{org,L}(i,j))\times(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))\big)$,
$L_{org}(i+x,j+y)$ is the pixel value at coordinate $(i+x,j+y)$ in $L_{org}$, $L_{dis}(i+x,j+y)$ is the pixel value at coordinate $(i+x,j+y)$ in $L_{dis}$, and $C_1$ is a constant introduced to keep the denominator of $B_L(i,j)$ from becoming zero; in practice $C_1=0.01$ may be used. Here $0\le i\le(W-8)$ and $0\le j\le(H-8)$.

Here, to account for the correlation between neighbouring pixels, an $8\times 8$ overlapping block shares 7 columns with its nearest left or right neighbour and, likewise, 7 rows with its nearest upper or lower neighbour.

Divide each of $R_{org}$ and $R_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, and compute the structural amplitude distortion map of every pair of co-located blocks in the two images. Denote the coefficient matrix of this map as $B_R$ and its coefficient at coordinate $(i,j)$ as $B_R(i,j)$:
$B_R(i,j)=\dfrac{2\times\sigma_{org,dis,R}(i,j)+C_1}{(\sigma_{org,R}(i,j))^2+(\sigma_{dis,R}(i,j))^2+C_1}$,
where $B_R(i,j)$ is the structural amplitude distortion value between the $8\times 8$ block of $R_{org}$ whose top-left corner is at $(i,j)$ and the $8\times 8$ block of $R_{dis}$ whose top-left corner is at $(i,j)$,
$\sigma_{org,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(R_{org}(i+x,j+y)-U_{org,R}(i,j)\big)^2}$,
$U_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{org}(i+x,j+y)$,
$\sigma_{dis,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(R_{dis}(i+x,j+y)-U_{dis,R}(i,j)\big)^2}$,
$U_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{dis}(i+x,j+y)$,
$\sigma_{org,dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big((R_{org}(i+x,j+y)-U_{org,R}(i,j))\times(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))\big)$,
$R_{org}(i+x,j+y)$ is the pixel value at coordinate $(i+x,j+y)$ in $R_{org}$, $R_{dis}(i+x,j+y)$ is the pixel value at coordinate $(i+x,j+y)$ in $R_{dis}$, and $C_1$ is a constant introduced to keep the denominator of $B_R(i,j)$ from becoming zero; in practice $C_1=0.01$ may be used. Here $0\le i\le(W-8)$ and $0\le j\le(H-8)$.
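The following sketch computes the structural amplitude distortion map of step ③ for one pair of views; it assumes NumPy arrays and uses a plain double loop over the $(W-7)\times(H-7)$ overlapping 8×8 blocks for clarity rather than speed, and the names are illustrative.

```python
def structural_amplitude_map(view_org, view_dis, c1=0.01, block=8):
    """B(i, j) of step 3: (2*cov + C1) / (var_org + var_dis + C1) for every
    pair of co-located 8x8 overlapping blocks."""
    h, w = view_org.shape
    bmap = np.zeros((h - block + 1, w - block + 1))
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            po = view_org[i:i + block, j:j + block].astype(np.float64)
            pd = view_dis[i:i + block, j:j + block].astype(np.float64)
            cov = ((po - po.mean()) * (pd - pd.mean())).mean()  # sigma_org,dis
            bmap[i, j] = (2.0 * cov + c1) / (po.var() + pd.var() + c1)
    return bmap
```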

④ Apply the horizontal and vertical Sobel operators to $L_{org}$ and $L_{dis}$ to obtain the horizontal gradient matrix map and the vertical gradient matrix map of each image. Denote the coefficient matrix of the horizontal gradient matrix map of $L_{org}$ as $I_{h,org,L}$, with coefficient at coordinate $(i,j)$
$I_{h,org,L}(i,j)=L_{org}(i+2,j)+2L_{org}(i+2,j+1)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i,j+1)-L_{org}(i,j+2)$;
denote the coefficient matrix of the vertical gradient matrix map of $L_{org}$ as $I_{v,org,L}$, with coefficient
$I_{v,org,L}(i,j)=L_{org}(i,j+2)+2L_{org}(i+1,j+2)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i+1,j)-L_{org}(i+2,j)$;
denote the coefficient matrix of the horizontal gradient matrix map of $L_{dis}$ as $I_{h,dis,L}$, with coefficient
$I_{h,dis,L}(i,j)=L_{dis}(i+2,j)+2L_{dis}(i+2,j+1)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i,j+1)-L_{dis}(i,j+2)$;
and denote the coefficient matrix of the vertical gradient matrix map of $L_{dis}$ as $I_{v,dis,L}$, with coefficient
$I_{v,dis,L}(i,j)=L_{dis}(i,j+2)+2L_{dis}(i+1,j+2)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i+1,j)-L_{dis}(i+2,j)$,
where $L_{org}(i+2,j)$, $L_{org}(i+2,j+1)$, $L_{org}(i+2,j+2)$, $L_{org}(i,j)$, $L_{org}(i,j+1)$, $L_{org}(i,j+2)$, $L_{org}(i+1,j+2)$ and $L_{org}(i+1,j)$ denote the pixel values of $L_{org}$ at the corresponding coordinates, and $L_{dis}(i+2,j)$, $L_{dis}(i+2,j+1)$, $L_{dis}(i+2,j+2)$, $L_{dis}(i,j)$, $L_{dis}(i,j+1)$, $L_{dis}(i,j+2)$, $L_{dis}(i+1,j+2)$ and $L_{dis}(i+1,j)$ denote the pixel values of $L_{dis}$ at the corresponding coordinates.

Apply the horizontal and vertical Sobel operators to $R_{org}$ and $R_{dis}$ to obtain the horizontal gradient matrix map and the vertical gradient matrix map of each image. Denote the coefficient matrix of the horizontal gradient matrix map of $R_{org}$ as $I_{h,org,R}$, with coefficient at coordinate $(i,j)$
$I_{h,org,R}(i,j)=R_{org}(i+2,j)+2R_{org}(i+2,j+1)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i,j+1)-R_{org}(i,j+2)$;
denote the coefficient matrix of the vertical gradient matrix map of $R_{org}$ as $I_{v,org,R}$, with coefficient
$I_{v,org,R}(i,j)=R_{org}(i,j+2)+2R_{org}(i+1,j+2)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i+1,j)-R_{org}(i+2,j)$;
denote the coefficient matrix of the horizontal gradient matrix map of $R_{dis}$ as $I_{h,dis,R}$, with coefficient
$I_{h,dis,R}(i,j)=R_{dis}(i+2,j)+2R_{dis}(i+2,j+1)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i,j+1)-R_{dis}(i,j+2)$;
and denote the coefficient matrix of the vertical gradient matrix map of $R_{dis}$ as $I_{v,dis,R}$, with coefficient
$I_{v,dis,R}(i,j)=R_{dis}(i,j+2)+2R_{dis}(i+1,j+2)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i+1,j)-R_{dis}(i+2,j)$,
where $R_{org}(i+2,j)$, $R_{org}(i+2,j+1)$, $R_{org}(i+2,j+2)$, $R_{org}(i,j)$, $R_{org}(i,j+1)$, $R_{org}(i,j+2)$, $R_{org}(i+1,j+2)$ and $R_{org}(i+1,j)$ denote the pixel values of $R_{org}$ at the corresponding coordinates, and $R_{dis}(i+2,j)$, $R_{dis}(i+2,j+1)$, $R_{dis}(i+2,j+2)$, $R_{dis}(i,j)$, $R_{dis}(i,j+1)$, $R_{dis}(i,j+2)$, $R_{dis}(i+1,j+2)$ and $R_{dis}(i+1,j)$ denote the pixel values of $R_{dis}$ at the corresponding coordinates.

⑤ Compute the structural direction distortion map of every pair of co-located blocks in $L_{org}$ and $L_{dis}$. Denote its coefficient matrix as $E_L$ and its coefficient at coordinate $(i,j)$ as $E_L(i,j)$:
$E_L(i,j)=\dfrac{I_{h,org,L}(i,j)\times I_{h,dis,L}(i,j)+I_{v,org,L}(i,j)\times I_{v,dis,L}(i,j)+C_2}{\sqrt{(I_{h,org,L}(i,j))^2+(I_{v,org,L}(i,j))^2}\times\sqrt{(I_{h,dis,L}(i,j))^2+(I_{v,dis,L}(i,j))^2}+C_2}$,
where $C_2$ is a constant introduced to keep the denominator of $E_L(i,j)$ from becoming zero; in practice $C_2=0.02$ may be used.

Compute the structural direction distortion map of every pair of co-located blocks in $R_{org}$ and $R_{dis}$. Denote its coefficient matrix as $E_R$ and its coefficient at coordinate $(i,j)$ as $E_R(i,j)$:
$E_R(i,j)=\dfrac{I_{h,org,R}(i,j)\times I_{h,dis,R}(i,j)+I_{v,org,R}(i,j)\times I_{v,dis,R}(i,j)+C_2}{\sqrt{(I_{h,org,R}(i,j))^2+(I_{v,org,R}(i,j))^2}\times\sqrt{(I_{h,dis,R}(i,j))^2+(I_{v,dis,R}(i,j))^2}+C_2}$,
where $C_2$ is a constant introduced to keep the denominator of $E_R(i,j)$ from becoming zero; in practice $C_2=0.02$ may be used.
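A sketch of the gradient maps of step ④ and the structural direction distortion map of step ⑤ follows, again assuming NumPy arrays; the Sobel responses are written directly from the top-left-anchored formulas above, and the function names are illustrative.

```python
def sobel_maps(img):
    """I_h and I_v of step 4, evaluated with (i, j) at the top-left corner of
    each 3x3 neighbourhood; the outputs have shape (H-2) x (W-2)."""
    f = img.astype(np.float64)
    i_h = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
           - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    i_v = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
           - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    return i_h, i_v

def structural_direction_map(view_org, view_dis, c2=0.02):
    """E(i, j) of step 5: a cosine-like similarity between the gradient
    vectors of the reference and distorted views."""
    ho, vo = sobel_maps(view_org)
    hd, vd = sobel_maps(view_dis)
    num = ho * hd + vo * vd + c2
    den = np.sqrt(ho ** 2 + vo ** 2) * np.sqrt(hd ** 2 + vd ** 2) + c2
    return num / den
```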

⑥ Compute the structural distortion evaluation value of $L_{org}$ and $L_{dis}$, denoted $Q_L$: $Q_L=\omega_1\times Q_{m,L}+\omega_2\times Q_{nm,L}$, where
$Q_{m,L}=\frac{1}{N_{L,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\big(0.5\times(B_L(i,j)+E_L(i,j))\times A_L(i,j)\big)$,
$N_{L,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_L(i,j)$,
$Q_{nm,L}=\frac{1}{N_{L,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\big(0.5\times(B_L(i,j)+E_L(i,j))\times(1-A_L(i,j))\big)$,
$N_{L,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_L(i,j))$,
$\omega_1$ denotes the weight of the sensitive region of $L_{org}$ and $L_{dis}$, and $\omega_2$ denotes the weight of the non-sensitive region of $L_{org}$ and $L_{dis}$.

Compute the structural distortion evaluation value of $R_{org}$ and $R_{dis}$, denoted $Q_R$: $Q_R=\omega'_1\times Q_{m,R}+\omega'_2\times Q_{nm,R}$, where
$Q_{m,R}=\frac{1}{N_{R,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\big(0.5\times(B_R(i,j)+E_R(i,j))\times A_R(i,j)\big)$,
$N_{R,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_R(i,j)$,
$Q_{nm,R}=\frac{1}{N_{R,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\big(0.5\times(B_R(i,j)+E_R(i,j))\times(1-A_R(i,j))\big)$,
$N_{R,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_R(i,j))$,
$\omega'_1$ denotes the weight of the sensitive region of $R_{org}$ and $R_{dis}$, and $\omega'_2$ denotes the weight of the non-sensitive region of $R_{org}$ and $R_{dis}$.
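The per-view structural distortion score of step ⑥ can then be sketched as below; cropping the maps to a common size before masking is an assumption about how the index ranges line up, and the default weights are the ones selected later in this embodiment.

```python
def view_quality(bmap, emap, mask, w_sensitive=1.0, w_nonsensitive=0.0):
    """Q_L or Q_R of step 6: average 0.5*(B + E) separately over sensitive and
    non-sensitive positions, then combine with the region weights."""
    h = min(bmap.shape[0], emap.shape[0], mask.shape[0])
    w = min(bmap.shape[1], emap.shape[1], mask.shape[1])
    score = 0.5 * (bmap[:h, :w] + emap[:h, :w])
    m = mask[:h, :w].astype(bool)
    q_sen = score[m].mean() if m.any() else 0.0
    q_non = score[~m].mean() if (~m).any() else 0.0
    return w_sensitive * q_sen + w_nonsensitive * q_non
```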

In this embodiment, Fig. 3 shows the block diagram for evaluating the quality of the left-viewpoint image. The 12 undistorted stereoscopic images shown in Fig. 2a through Fig. 2l are used to build the distorted stereoscopic image set of 312 distorted stereoscopic images, and a standard subjective quality evaluation method is applied to these 312 images to obtain the Difference Mean Opinion Score (DMOS) of each distorted stereoscopic image, i.e. its subjective quality score. DMOS is the difference between the mean opinion score (MOS) and the full score (100), DMOS = 100 − MOS, so a larger DMOS indicates a poorer distorted stereoscopic image and a smaller DMOS a better one, with DMOS taking values in [0, 100]. On the other hand, steps ① to ⑥ of the method of the present invention are applied to the 312 distorted stereoscopic images to obtain the corresponding $Q_{m,L}$ and $Q_{nm,L}$ of each image, and a four-parameter logistic nonlinear fit of $Q_L=\omega_1\times Q_{m,L}+(1-\omega_1)\times Q_{nm,L}$ is carried out to obtain the values of α and $\omega_1$. Three objective criteria commonly used to assess image quality evaluation methods serve as performance indicators: the Pearson correlation coefficient (CC) under nonlinear regression, the Spearman rank-order correlation coefficient (SROCC) and the root mean squared error (RMSE). CC reflects the prediction accuracy of the objective model of the distorted stereoscopic image evaluation function, SROCC reflects the monotonicity between the objective model and subjective perception, and RMSE reflects the accuracy of its predictions. Higher CC and SROCC values and a lower RMSE value indicate a better correlation between the objective stereoscopic image evaluation method and DMOS.
Fig. 4a, Fig. 4b and Fig. 4c show, for different α and $\omega_1$, the CC, SROCC and RMSE performance between the left-viewpoint image quality of the 312 stereoscopic images and subjective perception. Analysis of these figures shows that CC and SROCC increase and RMSE decreases as $\omega_1$ grows, indicating that the left-viewpoint image quality is mainly determined by the quality of the sensitive region, while changing α has little influence on the agreement between the left-viewpoint image quality and subjective perception. Fig. 5a, Fig. 5b and Fig. 5c show, for $\omega_1=1$ and $\omega_2=0$, the CC, SROCC and RMSE performance between the left-viewpoint image quality of the 312 stereoscopic images and subjective perception under different α. Analysis of these figures shows that the CC, SROCC and RMSE values fluctuate only at the percentile level but each exhibits a peak. Therefore, in this embodiment, $\omega_1=1$ and α = 2.1 are taken.
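For reference, the performance indicators used here can be reproduced with a sketch like the following, assuming SciPy is available; the particular four-parameter logistic form and the initial guesses are common choices rather than the exact ones used in this embodiment.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, b1, b2, b3, b4):
    # four-parameter logistic mapping from objective scores to DMOS
    return b2 + (b1 - b2) / (1.0 + np.exp(-(x - b3) / b4))

def fit_and_evaluate(objective, dmos):
    """Fit the logistic mapping, then report CC (Pearson), SROCC (Spearman)
    and RMSE between the mapped objective scores and DMOS."""
    objective = np.asarray(objective, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    p0 = [dmos.max(), dmos.min(), objective.mean(), objective.std() + 1e-6]
    params, _ = curve_fit(logistic4, objective, dmos, p0=p0, maxfev=20000)
    mapped = logistic4(objective, *params)
    cc = pearsonr(mapped, dmos)[0]
    srocc = spearmanr(objective, dmos)[0]
    rmse = float(np.sqrt(np.mean((mapped - dmos) ** 2)))
    return cc, srocc, rmse
```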

⑦ From $Q_L$ and $Q_R$, compute the spatial frequency similarity measure of the distorted stereoscopic image $S_{dis}$ to be evaluated relative to the original undistorted stereoscopic image $S_{org}$, denoted $Q_F$: $Q_F=\beta_1\times Q_L+(1-\beta_1)\times Q_R$, where $\beta_1$ denotes the weight of $Q_L$.

In this specific embodiment, $\beta_1$ in step ⑦ is obtained as follows:

⑦-1. Use n undistorted stereoscopic images to build a distorted stereoscopic image set covering different distortion types and distortion levels; the set contains multiple distorted stereoscopic images, where n ≥ 1.

⑦-2. Use a subjective quality evaluation method to obtain the difference mean opinion score of each distorted stereoscopic image in the set, denoted DMOS, DMOS = 100 − MOS, where MOS denotes the mean opinion score and DMOS ∈ [0, 100].

⑦-3. Following steps ① to ⑥, compute, for each distorted stereoscopic image in the set, the sensitive-region evaluation value $Q_{m,L}$ and the non-sensitive-region evaluation value $Q_{nm,L}$ of its left-viewpoint image relative to the left-viewpoint image of the corresponding undistorted stereoscopic image.

⑦-4. Use a mathematical fitting method to fit the DMOS values of the distorted stereoscopic images in the set against the corresponding $Q_{m,L}$ and $Q_{nm,L}$, thereby obtaining the value of $\beta_1$.

In this embodiment, $\beta_1$ determines the contribution of $Q_L$ to the stereoscopic image quality. For blocking artifacts, the stereoscopic image quality is roughly half the sum of the qualities of the left-viewpoint and right-viewpoint images, whereas for blur distortion it depends mainly on the better of the two viewpoints. Because the left-viewpoint and right-viewpoint images in this stereoscopic image test set are distorted simultaneously and to the same degree, their qualities differ little, so changing $\beta_1$ has little influence on the subjective performance of the stereoscopic image evaluation. Steps ① to ⑥ of the method of the present invention are first applied to the 312 distorted stereoscopic images to obtain the corresponding $Q_L$ and $Q_R$ of each image, and a four-parameter fit is then used to obtain $\beta_1$. Fig. 6a, Fig. 6b and Fig. 6c show, for different $\beta_1$, the CC, SROCC and RMSE performance between the quality of the left and right viewpoint images and subjective perception. Analysis of these figures shows that as $\beta_1$ varies, the CC, SROCC and RMSE values change little, fluctuating only at the percentile level, but each exhibits a peak. Here $\beta_1=0.5$ is taken.

⑧ Compute the absolute difference image of $L_{org}$ and $R_{org}$, expressed as a matrix whose entry at $(x,y)$ is $|L_{org}(x,y)-R_{org}(x,y)|$, and the absolute difference image of $L_{dis}$ and $R_{dis}$, expressed as a matrix whose entry at $(x,y)$ is $|L_{dis}(x,y)-R_{dis}(x,y)|$, where "| |" denotes the absolute value.
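A one-line sketch of the absolute difference images of step ⑧, assuming greyscale NumPy arrays:

```python
def abs_difference_image(left, right):
    """Pixel-wise absolute difference between the two viewpoint images."""
    return np.abs(left.astype(np.float64) - right.astype(np.float64))
```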

⑨ Divide each of the two absolute difference images into $(W_{LR}\times H_{LR})/64$ non-overlapping blocks of size $8\times 8$, and apply singular value decomposition to every block of the two images, obtaining for each image a singular value map made up of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map obtained from the absolute difference image of the original stereoscopic image as $G_{org}$, and the singular value at coordinate $(p,q)$ in the singular value matrix of the $n$-th block of $G_{org}$ as $G_{org}^{n}(p,q)$; denote the coefficient matrix of the singular value map obtained from the absolute difference image of the distorted stereoscopic image as $G_{dis}$, and the singular value at coordinate $(p,q)$ in the singular value matrix of the $n$-th block of $G_{dis}$ as $G_{dis}^{n}(p,q)$, where $W_{LR}$ and $H_{LR}$ denote the width and height of the absolute difference images, $0\le p\le 7$ and $0\le q\le 7$.

Here, to reduce the computational complexity, an $8\times 8$ block shares no rows or columns with its nearest left, right, upper or lower neighbour, i.e. the blocks do not overlap.

⑩ Compute the singular value deviation evaluation value between the two singular value maps, denoted K:
$K=\dfrac{64}{W_{LR}\times H_{LR}}\times\sum_{n=0}^{W_{LR}\times H_{LR}/64-1}\dfrac{\sum_{p=0}^{7}\big(G_{org}^{n}(p,p)\times|G_{org}^{n}(p,p)-G_{dis}^{n}(p,p)|\big)}{\sum_{p=0}^{7}G_{org}^{n}(p,p)}$,
where $G_{org}^{n}(p,p)$ denotes the singular value at coordinate $(p,p)$ in the singular value matrix of the $n$-th block of $G_{org}$, and $G_{dis}^{n}(p,p)$ denotes the singular value at coordinate $(p,p)$ in the singular value matrix of the $n$-th block of $G_{dis}$.
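The singular value deviation of steps ⑨ and ⑩ can be sketched as follows; the block loop mirrors the non-overlapping 8×8 partition, edge pixels that do not fill a whole block are simply dropped (an assumption), and the names are illustrative.

```python
def singular_value_deviation(diff_org, diff_dis, block=8):
    """K of step 10: per non-overlapping 8x8 block, the singular-value
    differences weighted by the reference singular values, averaged over all
    blocks."""
    h = (diff_org.shape[0] // block) * block
    w = (diff_org.shape[1] // block) * block
    total, n_blocks = 0.0, 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            s_org = np.linalg.svd(diff_org[i:i + block, j:j + block], compute_uv=False)
            s_dis = np.linalg.svd(diff_dis[i:i + block, j:j + block], compute_uv=False)
            denom = s_org.sum()
            if denom > 0:
                total += (s_org * np.abs(s_org - s_dis)).sum() / denom
            n_blocks += 1
    return total / n_blocks
```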

Singular value decomposition is then applied to each of the two absolute difference images as a whole, yielding for each image two orthogonal matrices and one singular value matrix. The two orthogonal matrices obtained from the absolute difference image of the original stereoscopic image are denoted $\chi_{org}$ and $V_{org}$ and its singular value matrix is denoted $O_{org}$; the two orthogonal matrices obtained from the absolute difference image of the distorted stereoscopic image are denoted $\chi_{dis}$ and $V_{dis}$ and its singular value matrix is denoted $O_{dis}$.

The residual matrix of each absolute difference image after its singular values are stripped is then computed. The residual matrix of the absolute difference image of the original stereoscopic image is denoted $X_{org}$, $X_{org}=\chi_{org}\times\Lambda\times V_{org}$, and the residual matrix of the absolute difference image of the distorted stereoscopic image is denoted $X_{dis}$, $X_{dis}=\chi_{dis}\times\Lambda\times V_{dis}$, where $\Lambda$ denotes the identity matrix, with the same size as $O_{org}$ and $O_{dis}$.
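A sketch of the residual image construction is given below; note that np.linalg.svd returns U, the singular values and Vᵀ, so rebuilding with an identity matrix in place of the singular values matches the χ × Λ × V form above up to the transpose convention, which is an assumption about the patent's notation.

```python
def residual_image(diff_img):
    """Residual matrix after the singular values are stripped: full SVD, then
    reconstruction with the singular values replaced by an identity matrix."""
    u, s, vt = np.linalg.svd(diff_img.astype(np.float64), full_matrices=False)
    lam = np.eye(s.shape[0])  # plays the role of the matrix Lambda
    return u @ lam @ vt
```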

The mean deviation rate of $X_{org}$ and $X_{dis}$ is then computed, where x denotes the abscissa and y denotes the ordinate of the pixels in $X_{org}$ and $X_{dis}$.

The stereoscopic perception evaluation measure of the distorted stereoscopic image $S_{dis}$ to be evaluated relative to the original undistorted stereoscopic image $S_{org}$ is then computed and denoted $Q_S$; it combines K and the mean deviation rate, where τ denotes a constant used to adjust the relative importance of K and the mean deviation rate in $Q_S$.

In this embodiment, the absolute difference images of the 312 distorted stereoscopic images and of the corresponding undistorted stereoscopic images are first computed, and the corresponding K and mean deviation rate of each distorted stereoscopic image are then obtained by the steps above. The value of τ determines the relative importance of the singular value deviation and of the residual information in the depth perception evaluation. Fig. 7a, Fig. 7b and Fig. 7c show, for different τ, the CC, SROCC and RMSE performance between the stereoscopic perception quality of the 312 distorted stereoscopic images and subjective perception. Analysis of these figures shows that CC, SROCC and RMSE each exhibit an extremum as τ varies over the tested range, and the extrema occur at roughly the same position; here τ = −8 is taken.

From $Q_F$ and $Q_S$, the image quality evaluation score of the distorted stereoscopic image $S_{dis}$ to be evaluated is computed, denoted Q, $Q=Q_F\times(Q_S)^{\rho}$, where ρ denotes the weighting coefficient (an exponent).
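Finally, the fusion of the view qualities and the stereoscopic perception quality can be sketched as below, with the parameter values selected in this embodiment as defaults; a typical call would be Q = overall_quality(q_l, q_r, q_s).

```python
def overall_quality(q_l, q_r, q_s, beta1=0.5, rho=0.3):
    """Q_F = beta1*Q_L + (1-beta1)*Q_R, then Q = Q_F * Q_S**rho."""
    q_f = beta1 * q_l + (1.0 - beta1) * q_r
    return q_f * (q_s ** rho)
```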

In this embodiment, the foregoing steps of the method of the present invention are applied to the 312 distorted stereoscopic images to obtain the corresponding $Q_F$ and $Q_S$ of each image, and a four-parameter logistic nonlinear fit of $Q=Q_F\times(Q_S)^{\rho}$ is carried out to obtain ρ. The value of ρ determines the contributions of the left/right viewpoint image quality and of the stereoscopic perception quality to the overall stereoscopic image quality. Both $Q_F$ and $Q_S$ decrease as the distortion of the stereoscopic image deepens, so ρ must be greater than 0. Fig. 8a, Fig. 8b and Fig. 8c show, for different ρ, the CC, SROCC and RMSE performance between the quality of the 312 stereoscopic images and subjective perception. Analysis of these figures shows that a ρ that is too large or too small degrades the consistency between the objective stereoscopic image quality model and subjective perception; as ρ varies, the CC, SROCC and RMSE values each exhibit an extremum at roughly the same position. Here ρ = 0.3 is taken.

The correlation between the final evaluation result of the image quality evaluation function $Q=Q_F\times(Q_S)^{0.3}$ obtained in this embodiment and the subjective score DMOS is analysed as follows. The output value Q of the final stereoscopic image quality evaluation is first computed with $Q=Q_F\times(Q_S)^{0.3}$, Q is then mapped through a four-parameter logistic nonlinear fit, and the performance indicators between the objective stereoscopic evaluation model and subjective perception are obtained. Four objective criteria commonly used to assess image quality evaluation methods serve as performance indicators: CC, SROCC, the outlier ratio (OR) and RMSE. OR reflects the dispersion of the objective stereoscopic image quality rating model, i.e. the proportion of distorted stereoscopic images whose difference between the fitted evaluation value and DMOS exceeds a given threshold. Table 1 lists the CC, SROCC, OR and RMSE coefficients. As the data in Table 1 show, the correlation between the output value Q of the final evaluation result and the subjective score DMOS is high: the CC and SROCC values both exceed 0.92 and the RMSE value is below 6.6, indicating that the objective evaluation results agree well with subjective human perception and demonstrating the effectiveness of the method of the present invention.

Table 1 Correlation between the image quality evaluation scores of the distorted stereoscopic images obtained in this embodiment and the subjective scores

          Gblur     JP2K      JPEG      WN        H264      ALL
Number    60        60        60        60        72        312
CC        0.9658    0.9479    0.9533    0.9554    0.9767    0.9235
SROCC     0.9655    0.9489    0.9524    0.9274    0.9545    0.9430
OR        0         0         0         0         0         0
RMSE      5.4719    3.8180    4.3010    4.6151    3.0135    6.5890

Claims (3)

1. A three-dimensional image objective quality evaluation method based on structural distortion, characterized by comprising the following steps:
① making $S_{org}$ the original undistorted stereoscopic image and $S_{dis}$ the distorted stereoscopic image to be evaluated, denoting the left viewpoint image of $S_{org}$ as $L_{org}$, the right viewpoint image of $S_{org}$ as $R_{org}$, the left viewpoint image of $S_{dis}$ as $L_{dis}$, and the right viewpoint image of $S_{dis}$ as $R_{dis}$;
② performing region division on each of the four images $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$ to obtain the sensitive region matrix maps corresponding to the four images; denoting the coefficient matrix of the sensitive region matrix map obtained by jointly dividing $L_{org}$ and $L_{dis}$ as $A_L$ and its coefficient at coordinate position $(i,j)$ as $A_L(i,j)$, where $A_L(i,j)=1$ if the pixel at $(i,j)$ belongs to the sensitive region and $A_L(i,j)=0$ if it belongs to the non-sensitive region; denoting the coefficient matrix of the sensitive region matrix map obtained by jointly dividing $R_{org}$ and $R_{dis}$ as $A_R$ and its coefficient at coordinate position $(i,j)$ as $A_R(i,j)$, defined in the same way; wherein $0\le i\le(W-8)$, $0\le j\le(H-8)$, W denotes the width of $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$, and H denotes the height of $L_{org}$, $L_{dis}$, $R_{org}$ and $R_{dis}$;
③ dividing each of the two images $L_{org}$ and $L_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, and computing the structural amplitude distortion map of every pair of co-located blocks in the two images, the coefficient matrix of the structural amplitude distortion map being denoted $B_L$ and its coefficient at coordinate position $(i,j)$ being denoted $B_L(i,j)$:
$B_L(i,j)=\dfrac{2\times\sigma_{org,dis,L}(i,j)+C_1}{(\sigma_{org,L}(i,j))^2+(\sigma_{dis,L}(i,j))^2+C_1}$,
wherein $B_L(i,j)$ also denotes the structural amplitude distortion value between the $8\times 8$ block of $L_{org}$ whose top-left corner is at $(i,j)$ and the $8\times 8$ block of $L_{dis}$ whose top-left corner is at $(i,j)$,
$\sigma_{org,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(L_{org}(i+x,j+y)-U_{org,L}(i,j)\big)^2}$,
$U_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{org}(i+x,j+y)$,
$\sigma_{dis,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(L_{dis}(i+x,j+y)-U_{dis,L}(i,j)\big)^2}$,
$U_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{dis}(i+x,j+y)$,
$\sigma_{org,dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big((L_{org}(i+x,j+y)-U_{org,L}(i,j))\times(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))\big)$,
$L_{org}(i+x,j+y)$ denotes the pixel value at coordinate position $(i+x,j+y)$ in $L_{org}$, $L_{dis}(i+x,j+y)$ denotes the pixel value at coordinate position $(i+x,j+y)$ in $L_{dis}$, $C_1$ denotes a constant, $0\le i\le(W-8)$ and $0\le j\le(H-8)$;
dividing each of the two images $R_{org}$ and $R_{dis}$ into $(W-7)\times(H-7)$ overlapping blocks of size $8\times 8$, and computing the structural amplitude distortion map of every pair of co-located blocks in the two images, the coefficient matrix of the structural amplitude distortion map being denoted $B_R$ and its coefficient at coordinate position $(i,j)$ being denoted $B_R(i,j)$:
$B_R(i,j)=\dfrac{2\times\sigma_{org,dis,R}(i,j)+C_1}{(\sigma_{org,R}(i,j))^2+(\sigma_{dis,R}(i,j))^2+C_1}$,
wherein $B_R(i,j)$ also denotes the structural amplitude distortion value between the $8\times 8$ block of $R_{org}$ whose top-left corner is at $(i,j)$ and the $8\times 8$ block of $R_{dis}$ whose top-left corner is at $(i,j)$,
$\sigma_{org,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(R_{org}(i+x,j+y)-U_{org,R}(i,j)\big)^2}$,
$U_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{org}(i+x,j+y)$,
$\sigma_{dis,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big(R_{dis}(i+x,j+y)-U_{dis,R}(i,j)\big)^2}$,
$U_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{dis}(i+x,j+y)$,
$\sigma_{org,dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}\big((R_{org}(i+x,j+y)-U_{org,R}(i,j))\times(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))\big)$,
$R_{org}(i+x,j+y)$ denotes the pixel value at coordinate position $(i+x,j+y)$ in $R_{org}$, $R_{dis}(i+x,j+y)$ denotes the pixel value at coordinate position $(i+x,j+y)$ in $R_{dis}$, $C_1$ denotes a constant, $0\le i\le(W-8)$ and $0\le j\le(H-8)$;
④ applying horizontal and vertical Sobel operator processing to each of the two images $L_{org}$ and $L_{dis}$ to obtain the horizontal gradient matrix map and the vertical gradient matrix map corresponding to each image; denoting the coefficient matrix of the horizontal gradient matrix map obtained from $L_{org}$ as $I_{h,org,L}$, its coefficient at coordinate position $(i,j)$ being
$I_{h,org,L}(i,j)=L_{org}(i+2,j)+2L_{org}(i+2,j+1)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i,j+1)-L_{org}(i,j+2)$;
denoting the coefficient matrix of the vertical gradient matrix map obtained from $L_{org}$ as $I_{v,org,L}$, its coefficient at $(i,j)$ being
$I_{v,org,L}(i,j)=L_{org}(i,j+2)+2L_{org}(i+1,j+2)+L_{org}(i+2,j+2)-L_{org}(i,j)-2L_{org}(i+1,j)-L_{org}(i+2,j)$;
denoting the coefficient matrix of the horizontal gradient matrix map obtained from $L_{dis}$ as $I_{h,dis,L}$, its coefficient at $(i,j)$ being
$I_{h,dis,L}(i,j)=L_{dis}(i+2,j)+2L_{dis}(i+2,j+1)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i,j+1)-L_{dis}(i,j+2)$;
and denoting the coefficient matrix of the vertical gradient matrix map obtained from $L_{dis}$ as $I_{v,dis,L}$, its coefficient at $(i,j)$ being
$I_{v,dis,L}(i,j)=L_{dis}(i,j+2)+2L_{dis}(i+1,j+2)+L_{dis}(i+2,j+2)-L_{dis}(i,j)-2L_{dis}(i+1,j)-L_{dis}(i+2,j)$;
wherein $L_{org}(i+2,j)$, $L_{org}(i+2,j+1)$, $L_{org}(i+2,j+2)$, $L_{org}(i,j)$, $L_{org}(i,j+1)$, $L_{org}(i,j+2)$, $L_{org}(i+1,j+2)$ and $L_{org}(i+1,j)$ denote the pixel values of $L_{org}$ at the corresponding coordinate positions, and $L_{dis}(i+2,j)$, $L_{dis}(i+2,j+1)$, $L_{dis}(i+2,j+2)$, $L_{dis}(i,j)$, $L_{dis}(i,j+1)$, $L_{dis}(i,j+2)$, $L_{dis}(i+1,j+2)$ and $L_{dis}(i+1,j)$ denote the pixel values of $L_{dis}$ at the corresponding coordinate positions;
Likewise, apply the Sobel operator in the horizontal direction and in the vertical direction to each of R_org and R_dis, so as to obtain, for each of the 2 images, a horizontal gradient matrix map and a vertical gradient matrix map. Denote the coefficient matrix of the horizontal gradient matrix map obtained from R_org as I_h,org,R, and denote the coefficient value at coordinate (i,j) in I_h,org,R as I_h,org,R(i,j), $I_{h,org,R}(i,j)=R_{org}(i+2,j)+2R_{org}(i+2,j+1)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i,j+1)-R_{org}(i,j+2)$; denote the coefficient matrix of the vertical gradient matrix map obtained from R_org as I_v,org,R, and denote the coefficient value at coordinate (i,j) in I_v,org,R as I_v,org,R(i,j), $I_{v,org,R}(i,j)=R_{org}(i,j+2)+2R_{org}(i+1,j+2)+R_{org}(i+2,j+2)-R_{org}(i,j)-2R_{org}(i+1,j)-R_{org}(i+2,j)$; denote the coefficient matrix of the horizontal gradient matrix map obtained from R_dis as I_h,dis,R, and denote the coefficient value at coordinate (i,j) in I_h,dis,R as I_h,dis,R(i,j), $I_{h,dis,R}(i,j)=R_{dis}(i+2,j)+2R_{dis}(i+2,j+1)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i,j+1)-R_{dis}(i,j+2)$; denote the coefficient matrix of the vertical gradient matrix map obtained from R_dis as I_v,dis,R, and denote the coefficient value at coordinate (i,j) in I_v,dis,R as I_v,dis,R(i,j), $I_{v,dis,R}(i,j)=R_{dis}(i,j+2)+2R_{dis}(i+1,j+2)+R_{dis}(i+2,j+2)-R_{dis}(i,j)-2R_{dis}(i+1,j)-R_{dis}(i+2,j)$, where R_org(i+2,j), R_org(i+2,j+1), R_org(i+2,j+2), R_org(i,j), R_org(i,j+1), R_org(i,j+2), R_org(i+1,j+2) and R_org(i+1,j) denote the pixel values of R_org at coordinates (i+2,j), (i+2,j+1), (i+2,j+2), (i,j), (i,j+1), (i,j+2), (i+1,j+2) and (i+1,j) respectively, and R_dis(i+2,j), R_dis(i+2,j+1), R_dis(i+2,j+2), R_dis(i,j), R_dis(i,j+1), R_dis(i,j+2), R_dis(i+1,j+2) and R_dis(i+1,j) denote the corresponding pixel values of R_dis;
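Purely as an illustration of the gradient definitions above (again not part of the claim language), a NumPy sketch that evaluates the two Sobel responses for every valid top-left position (i, j); the function name and the vectorised slicing are our own choices:

import numpy as np

def sobel_maps(img):
    # Horizontal and vertical Sobel responses exactly as written in the
    # claim, indexed by the top-left pixel (i, j) of each 3x3 window.
    # Returns two arrays two rows and two columns smaller than the input.
    x = img.astype(np.float64)
    i_h = (x[2:, :-2] + 2.0 * x[2:, 1:-1] + x[2:, 2:]
           - x[:-2, :-2] - 2.0 * x[:-2, 1:-1] - x[:-2, 2:])
    i_v = (x[:-2, 2:] + 2.0 * x[1:-1, 2:] + x[2:, 2:]
           - x[:-2, :-2] - 2.0 * x[1:-1, :-2] - x[2:, :-2])
    return i_h, i_v

The same routine would be applied to L_org, L_dis, R_org and R_dis to obtain I_h,org,L, I_v,org,L and the corresponding maps of the other three images.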
⑤ Calculate the structural direction distortion map of the two overlapped blocks at the same coordinate position in L_org and L_dis. Denote the coefficient matrix of this structural direction distortion map as E_L, and denote the coefficient value at coordinate (i,j) in E_L as E_L(i,j), $E_{L}(i,j)=\dfrac{I_{h,org,L}(i,j)\times I_{h,dis,L}(i,j)+I_{v,org,L}(i,j)\times I_{v,dis,L}(i,j)+C_{2}}{\sqrt{(I_{h,org,L}(i,j))^{2}+(I_{v,org,L}(i,j))^{2}}\times\sqrt{(I_{h,dis,L}(i,j))^{2}+(I_{v,dis,L}(i,j))^{2}}+C_{2}}$, where C2 is a constant;
Similarly, calculate the structural direction distortion map of the two overlapped blocks at the same coordinate position in R_org and R_dis. Denote the coefficient matrix of this structural direction distortion map as E_R, and denote the coefficient value at coordinate (i,j) in E_R as E_R(i,j), $E_{R}(i,j)=\dfrac{I_{h,org,R}(i,j)\times I_{h,dis,R}(i,j)+I_{v,org,R}(i,j)\times I_{v,dis,R}(i,j)+C_{2}}{\sqrt{(I_{h,org,R}(i,j))^{2}+(I_{v,org,R}(i,j))^{2}}\times\sqrt{(I_{h,dis,R}(i,j))^{2}+(I_{v,dis,R}(i,j))^{2}}+C_{2}}$;
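Again as an illustrative sketch rather than the patent's own code, the direction-distortion map E can be computed from the four gradient maps of one view; the numerical value of the stabilising constant below is a placeholder, since the claim only states that C2 is a constant:

import numpy as np

def direction_distortion(ih_org, iv_org, ih_dis, iv_dis, c2=1e-4):
    # Normalised correlation of the reference and distorted gradient
    # vectors at each position; c2 stands in for the constant C2.
    num = ih_org * ih_dis + iv_org * iv_dis + c2
    den = (np.sqrt(ih_org ** 2 + iv_org ** 2)
           * np.sqrt(ih_dis ** 2 + iv_dis ** 2) + c2)
    return num / den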
⑥ Calculate the structural distortion evaluation value of L_org and L_dis, denoted as Q_L, $Q_{L}=\omega_{1}\times Q_{m,L}+\omega_{2}\times Q_{nm,L}$, where $Q_{m,L}=\frac{1}{N_{L,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(0.5\times(B_{L}(i,j)+E_{L}(i,j))\times A_{L}(i,j)\bigr)$, $N_{L,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_{L}(i,j)$, $Q_{nm,L}=\frac{1}{N_{L,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(0.5\times(B_{L}(i,j)+E_{L}(i,j))\times(1-A_{L}(i,j))\bigr)$, $N_{L,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(1-A_{L}(i,j)\bigr)$, ω1 is the weight of the sensitive region in L_org and L_dis, and ω2 is the weight of the non-sensitive region in L_org and L_dis;
Likewise, calculate the structural distortion evaluation value of R_org and R_dis, denoted as Q_R, $Q_{R}=\omega'_{1}\times Q_{m,R}+\omega'_{2}\times Q_{nm,R}$, where $Q_{m,R}=\frac{1}{N_{R,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(0.5\times(B_{R}(i,j)+E_{R}(i,j))\times A_{R}(i,j)\bigr)$, $N_{R,m}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_{R}(i,j)$, $Q_{nm,R}=\frac{1}{N_{R,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(0.5\times(B_{R}(i,j)+E_{R}(i,j))\times(1-A_{R}(i,j))\bigr)$, $N_{R,nm}=\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}\bigl(1-A_{R}(i,j)\bigr)$, ω'1 is the weight of the sensitive region in R_org and R_dis, and ω'2 is the weight of the non-sensitive region in R_org and R_dis;
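For readers who prefer code to summation notation, a hedged sketch of the region-weighted pooling in this step follows. It assumes the amplitude-distortion map B, the direction-distortion map E and the sensitivity mask A have been cropped to a common index range, which the claim implies but does not spell out; the names are ours:

import numpy as np

def region_pooled_quality(b_map, e_map, a_map, w_sensitive, w_nonsensitive):
    # Per-position score 0.5 * (B + E), pooled separately over the
    # sensitive (A == 1) and non-sensitive (A == 0) positions.
    score = 0.5 * (b_map + e_map)
    n_m = a_map.sum()
    n_nm = (1.0 - a_map).sum()
    q_m = (score * a_map).sum() / n_m if n_m > 0 else 0.0
    q_nm = (score * (1.0 - a_map)).sum() / n_nm if n_nm > 0 else 0.0
    # Q_L uses (w1, w2); Q_R uses (w1', w2') with its own B, E and A maps.
    return w_sensitive * q_m + w_nonsensitive * q_nm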
⑦ According to Q_L and Q_R, calculate the spatial frequency similarity measure of the distorted stereoscopic image S_dis to be evaluated relative to the original undistorted stereoscopic image S_org, denoted as Q_F, $Q_{F}=\beta_{1}\times Q_{L}+(1-\beta_{1})\times Q_{R}$, where β1 is the weight of Q_L;
⑧ Calculate the absolute difference image of L_org and R_org, expressed in matrix form as the element-wise absolute difference |L_org − R_org|, and calculate the absolute difference image of L_dis and R_dis, expressed in matrix form as the element-wise absolute difference |L_dis − R_dis|, where "| |" is the absolute value symbol;
⑨ Divide each of the two absolute difference images into $\frac{W_{LR}\times H_{LR}}{64}$ non-overlapping blocks of size 8×8, and then perform singular value decomposition on all blocks of the two absolute difference images, so as to obtain for each image a singular value map composed of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map obtained from the absolute difference image of the original pair as G_org, and denote the singular value at coordinate (p,q) in the singular value matrix of its nth block as G_org^n(p,q); denote the coefficient matrix of the singular value map obtained from the absolute difference image of the distorted pair as G_dis, and denote the singular value at coordinate (p,q) in the singular value matrix of its nth block as G_dis^n(p,q), where W_LR and H_LR are the width and the height of the two absolute difference images, 0 ≤ p ≤ 7, and 0 ≤ q ≤ 7;
⑩ Calculate the singular value deviation evaluation value between the singular value map of the original absolute difference image and the singular value map of the distorted absolute difference image, denoted as K, $K=\dfrac{64}{W_{LR}\times H_{LR}}\times\sum_{n=0}^{W_{LR}\times H_{LR}/64-1}\dfrac{\sum_{p=0}^{7}\bigl(G_{org}^{n}(p,p)\times\bigl|G_{org}^{n}(p,p)-G_{dis}^{n}(p,p)\bigr|\bigr)}{\sum_{p=0}^{7}G_{org}^{n}(p,p)}$, where G_org^n(p,p) denotes the singular value at coordinate (p,p) in the singular value matrix of the nth block of G_org, and G_dis^n(p,p) denotes the singular value at coordinate (p,p) in the singular value matrix of the nth block of G_dis;
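A compact sketch of the singular value deviation K, under the assumption (consistent with the claim) that the two absolute difference images have sides divisible by 8; NumPy's svd returns the singular values, i.e. the diagonal entries G^n(p,p), and the guard for an all-zero block is our own addition:

import numpy as np

def singular_value_deviation(diff_org, diff_dis, block=8):
    h, w = diff_org.shape
    terms = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s_org = np.linalg.svd(diff_org[i:i + block, j:j + block].astype(np.float64),
                                  compute_uv=False)
            s_dis = np.linalg.svd(diff_dis[i:i + block, j:j + block].astype(np.float64),
                                  compute_uv=False)
            denom = s_org.sum()
            if denom > 0:
                terms.append((s_org * np.abs(s_org - s_dis)).sum() / denom)
            else:
                terms.append(0.0)  # degenerate all-zero block
    return float(np.mean(terms))   # equals 64/(W_LR*H_LR) times the block sum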
Perform singular value decomposition on each of the two absolute difference images as a whole, so as to obtain, for each image, 2 orthogonal matrices and 1 singular value matrix. Denote the 2 orthogonal matrices obtained from the singular value decomposition of the original absolute difference image as χ_org and V_org, and denote its singular value matrix as O_org; denote the 2 orthogonal matrices obtained from the singular value decomposition of the distorted absolute difference image as χ_dis and V_dis, and denote its singular value matrix as O_dis;
Respectively calculate the residual matrix image of each absolute difference image after its singular values are stripped. Denote the residual matrix image of the original absolute difference image as X_org, $X_{org}=\chi_{org}\times\Lambda\times V_{org}$, and denote the residual matrix image of the distorted absolute difference image as X_dis, $X_{dis}=\chi_{dis}\times\Lambda\times V_{dis}$, where Λ is the identity matrix whose size is consistent with that of O_org and O_dis;
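The residual images X_org and X_dis replace each image's singular-value matrix by the identity. A sketch using NumPy's SVD follows, noting the assumption that NumPy's second factor (returned already transposed) plays the role of the claim's V:

import numpy as np

def singular_value_stripped(image):
    # Full-image SVD: image = u @ diag(s) @ vh.
    u, s, vh = np.linalg.svd(np.asarray(image, dtype=np.float64),
                             full_matrices=False)
    lam = np.eye(s.shape[0])  # Lambda: identity, same size as the singular-value matrix
    return u @ lam @ vh       # residual image with the singular values stripped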
Calculate the mean deviation rate of X_org and X_dis, where x denotes the abscissa and y denotes the ordinate of the pixel points in X_org and X_dis;
Calculate the stereoscopic perception evaluation metric of the distorted stereoscopic image S_dis to be evaluated relative to the original undistorted stereoscopic image S_org, denoted as Q_S, where τ is a constant used to adjust the relative importance of K and of the mean deviation rate in Q_S;
According to Q_F and Q_S, calculate the image quality evaluation score of the distorted stereoscopic image S_dis to be evaluated, denoted as Q, $Q=Q_{F}\times(Q_{S})^{\rho}$, where ρ is a weight coefficient value.
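Finally, an illustrative helper (parameter names are ours) that assembles the overall score from the quantities defined in the claim, blending the left and right structural scores into Q_F and modulating by the stereoscopic perception term Q_S:

def overall_quality(q_l, q_r, q_s, beta1, rho):
    # Q_F = beta1*Q_L + (1 - beta1)*Q_R, then Q = Q_F * Q_S**rho.
    q_f = beta1 * q_l + (1.0 - beta1) * q_r
    return q_f * (q_s ** rho)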
2. The method for evaluating the objective quality of a stereoscopic image based on structural distortion according to claim 1, wherein in step ②, the coefficient matrix A_L of the sensitive region matrix map corresponding to L_org and L_dis is obtained as follows:
②-a1. Apply the Sobel operator in the horizontal and vertical directions to L_org to obtain its gradient maps in the horizontal and vertical directions, denoted as Z_h,l1 and Z_v,l1 respectively, and then calculate the gradient magnitude map of L_org, denoted as Z_l1, where Z_l1(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_l1, Z_h,l1(x,y) denotes the horizontal gradient value of the pixel at coordinate (x,y) in Z_h,l1, Z_v,l1(x,y) denotes the vertical gradient value of the pixel at coordinate (x,y) in Z_v,l1, 1 ≤ x ≤ W', 1 ≤ y ≤ H', W' is the width of Z_l1, and H' is the height of Z_l1;
②-a2. Apply the Sobel operator in the horizontal and vertical directions to L_dis to obtain its gradient maps in the horizontal and vertical directions, denoted as Z_h,l2 and Z_v,l2 respectively, and then calculate the gradient magnitude map of L_dis, denoted as Z_l2, where Z_l2(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_l2, Z_h,l2(x,y) denotes the horizontal gradient value of the pixel at coordinate (x,y) in Z_h,l2, Z_v,l2(x,y) denotes the vertical gradient value of the pixel at coordinate (x,y) in Z_v,l2, 1 ≤ x ≤ W', 1 ≤ y ≤ H', W' is the width of Z_l2, and H' is the height of Z_l2;
②-a3. Calculate the threshold T required for region partitioning, $T=\alpha\times\dfrac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l2}(x,y)\right)$, where α is a constant, Z_l1(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_l1, and Z_l2(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_l2;
②-a4. Denote the gradient magnitude of the pixel at coordinate (i,j) in Z_l1 as Z_l1(i,j) and the gradient magnitude of the pixel at coordinate (i,j) in Z_l2 as Z_l2(i,j), and judge whether Z_l1(i,j) > T or Z_l2(i,j) > T holds; if so, determine that the pixel at coordinate (i,j) in L_org and L_dis belongs to the sensitive region and set A_L(i,j) = 1; otherwise, determine that the pixel at coordinate (i,j) in L_org and L_dis belongs to the non-sensitive region and set A_L(i,j) = 0, where 0 ≤ i ≤ (W−8) and 0 ≤ j ≤ (H−8);
In step ②, the coefficient matrix A_R of the sensitive region matrix map corresponding to R_org and R_dis is obtained as follows:
②-b1. Apply the Sobel operator in the horizontal and vertical directions to R_org to obtain its gradient maps in the horizontal and vertical directions, denoted as Z_h,r1 and Z_v,r1 respectively, and then calculate the gradient magnitude map of R_org, denoted as Z_r1, where Z_r1(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_r1, Z_h,r1(x,y) denotes the horizontal gradient value of the pixel at coordinate (x,y) in Z_h,r1, Z_v,r1(x,y) denotes the vertical gradient value of the pixel at coordinate (x,y) in Z_v,r1, 1 ≤ x ≤ W', 1 ≤ y ≤ H', W' is the width of Z_r1, and H' is the height of Z_r1;
②-b2. Apply the Sobel operator in the horizontal and vertical directions to R_dis to obtain its gradient maps in the horizontal and vertical directions, denoted as Z_h,r2 and Z_v,r2 respectively, and then calculate the gradient magnitude map of R_dis, denoted as Z_r2, where Z_r2(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_r2, Z_h,r2(x,y) denotes the horizontal gradient value of the pixel at coordinate (x,y) in Z_h,r2, Z_v,r2(x,y) denotes the vertical gradient value of the pixel at coordinate (x,y) in Z_v,r2, 1 ≤ x ≤ W', 1 ≤ y ≤ H', W' is the width of Z_r2, and H' is the height of Z_r2;
②-b3. Calculate the threshold T' required for region partitioning, $T'=\alpha\times\dfrac{1}{W'\times H'}\times\left(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r2}(x,y)\right)$, where α is a constant, Z_r1(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_r1, and Z_r2(x,y) denotes the gradient magnitude of the pixel at coordinate (x,y) in Z_r2;
②-b4. Denote the gradient magnitude of the pixel at coordinate (i,j) in Z_r1 as Z_r1(i,j) and the gradient magnitude of the pixel at coordinate (i,j) in Z_r2 as Z_r2(i,j), and judge whether Z_r1(i,j) > T' or Z_r2(i,j) > T' holds; if so, determine that the pixel at coordinate (i,j) in R_org and R_dis belongs to the sensitive region and set A_R(i,j) = 1; otherwise, determine that the pixel at coordinate (i,j) in R_org and R_dis belongs to the non-sensitive region and set A_R(i,j) = 0, where 0 ≤ i ≤ W−8 and 0 ≤ j ≤ H−8.
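As a sketch of the region partitioning in claim 2 (steps ②-a1 to ②-b4), the following helper builds the binary sensitivity map for one view pair. It assumes the usual gradient magnitude sqrt(h² + v²), an assumption on our part since the exact magnitude formula is not reproduced in the text above, and the threshold follows the T (or T') expression:

import numpy as np

def sensitivity_mask(grad_mag_ref, grad_mag_dis, alpha):
    # grad_mag_* are assumed gradient-magnitude maps of the reference and
    # distorted view, e.g. np.sqrt(i_h**2 + i_v**2) from the Sobel responses.
    # T = alpha * (1/(W'*H')) * (sum of both maps), i.e. alpha times the sum
    # of the two per-map means.
    t = alpha * (grad_mag_ref.mean() + grad_mag_dis.mean())
    # A position is sensitive (A = 1) when either view exceeds the threshold.
    return ((grad_mag_ref > t) | (grad_mag_dis > t)).astype(np.float64)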
3. The method according to claim 1 or 2, wherein in step ⑦, β1 is obtained as follows:
⑦-1. Use n undistorted stereoscopic images to establish a set of distorted stereoscopic images under different distortion types and different distortion degrees, the set containing several distorted stereoscopic images, where n ≥ 1;
⑦-2. Obtain the average subjective score difference value of each distorted stereoscopic image in the distorted stereoscopic image set by using a subjective quality evaluation method, and record it as DMOS, where DMOS = 100 − MOS, MOS denotes the mean subjective score, and DMOS ∈ [0, 100];
⑦-3. According to the operation processes from step ① to step ⑥, calculate the sensitive region evaluation value Q_m,L and the non-sensitive region evaluation value Q_nm,L of the left viewpoint image of each distorted stereoscopic image in the set relative to the left viewpoint image of the corresponding undistorted stereoscopic image;
⑦-4. Fit the average subjective score difference values DMOS of the distorted stereoscopic images in the set and the corresponding Q_m,L and Q_nm,L by a mathematical fitting method, thereby obtaining the value of β1.
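Claim 3 leaves the fitting method open ("a mathematical fitting method"), so the following is only one plausible realisation we supply for illustration: a grid search over candidate weights that maximises the absolute Pearson correlation between the blended score and DMOS. The function name, the grid, and the use of SciPy are all our own assumptions.

import numpy as np
from scipy.stats import pearsonr

def fit_beta(dmos, q_m, q_nm, candidates=np.linspace(0.0, 1.0, 101)):
    # Search the weight w in w*Q_m + (1-w)*Q_nm that best tracks DMOS.
    dmos = np.asarray(dmos, dtype=np.float64)
    q_m = np.asarray(q_m, dtype=np.float64)
    q_nm = np.asarray(q_nm, dtype=np.float64)
    best_w, best_r = 0.5, -1.0
    for w in candidates:
        combined = w * q_m + (1.0 - w) * q_nm
        r = pearsonr(combined, dmos)[0]
        if np.isnan(r):
            continue  # skip degenerate (constant) combinations
        if abs(r) > best_r:
            best_w, best_r = w, abs(r)
    return best_w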
CN201210145034.0A 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion Expired - Fee Related CN102708568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210145034.0A CN102708568B (en) 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210145034.0A CN102708568B (en) 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion

Publications (2)

Publication Number Publication Date
CN102708568A CN102708568A (en) 2012-10-03
CN102708568B true CN102708568B (en) 2014-11-05

Family

ID=46901288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210145034.0A Expired - Fee Related CN102708568B (en) 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion

Country Status (1)

Country Link
CN (1) CN102708568B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103108209B (en) * 2012-12-28 2015-03-11 宁波大学 Stereo image objective quality evaluation method based on integration of visual threshold value and passage
CN104036502B (en) * 2014-06-03 2016-08-24 宁波大学 A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology
CN104574363B (en) * 2014-12-12 2017-09-29 南京邮电大学 A kind of full reference image quality appraisement method for considering gradient direction difference
CN108074241B (en) * 2018-01-16 2021-10-22 深圳大学 Target image quality scoring method, device, terminal and storage medium
CN110232680B (en) * 2019-05-30 2021-04-27 广智微芯(扬州)有限公司 Image ambiguity evaluation method and device
CN113920065B (en) * 2021-09-18 2023-04-28 天津大学 Imaging quality evaluation method for visual detection system of industrial site

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833766A (en) * 2010-05-11 2010-09-15 天津大学 Stereoscopic Image Objective Quality Evaluation Algorithm Based on GSSIM
CN101872479A (en) * 2010-06-09 2010-10-27 宁波大学 A Stereo Image Objective Quality Evaluation Method
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7512286B2 (en) * 2003-10-27 2009-03-31 Hewlett-Packard Development Company, L.P. Assessing image quality
US8295565B2 (en) * 2007-03-16 2012-10-23 Sti Medical Systems, Llc Method of image quality assessment to produce standardized imaging data
JP4895204B2 (en) * 2007-03-22 2012-03-14 富士フイルム株式会社 Image component separation device, method, and program, and normal image generation device, method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833766A (en) * 2010-05-11 2010-09-15 天津大学 Stereoscopic Image Objective Quality Evaluation Algorithm Based on GSSIM
CN101872479A (en) * 2010-06-09 2010-10-27 宁波大学 A Stereo Image Objective Quality Evaluation Method
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhou Junming et al., "An objective quality assessment model for stereoscopic images using singular value decomposition," Journal of Computer-Aided Design & Computer Graphics, vol. 23, no. 5, May 2011, pp. 870-877 *
Shen Lili et al., "An image quality assessment method based on three-dimensional features and structural similarity," Journal of Optoelectronics·Laser, vol. 21, no. 11, Nov. 2010, pp. 1713-1719 *

Also Published As

Publication number Publication date
CN102708568A (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN101872479B (en) Three-dimensional image objective quality evaluation method
CN102663747B (en) Stereo image objectivity quality evaluation method based on visual perception
CN106097327B (en) In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic
CN103581661B (en) Method for evaluating visual comfort degree of three-dimensional image
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN104036501B (en) A kind of objective evaluation method for quality of stereo images based on rarefaction representation
CN102708568B (en) Stereoscopic image objective quality evaluation method on basis of structural distortion
CN103517065B (en) Method for objectively evaluating quality of degraded reference three-dimensional picture
CN104581143B (en) A kind of based on machine learning without with reference to objective evaluation method for quality of stereo images
CN105744256A (en) Three-dimensional image quality objective evaluation method based on graph-based visual saliency
CN103413298B (en) A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN102209257A (en) Stereo image quality objective evaluation method
CN102595185A (en) Stereo image quality objective evaluation method
CN104036502B (en) A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology
CN105282543B (en) Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception
CN102521825B (en) An Objective Evaluation Method of Stereo Image Quality Based on Zero Watermark
CN109429051B (en) An objective evaluation method of no-reference stereoscopic video quality based on multi-view feature learning
CN105407349A (en) No-reference objective three-dimensional image quality evaluation method based on binocular visual perception
CN104954778A (en) Objective stereo image quality assessment method based on perception feature set
CN103338380A (en) Adaptive image quality objective evaluation method
CN102843572B (en) Phase-based stereo image quality objective evaluation method
CN104811691A (en) Stereoscopic video quality objective evaluation method based on wavelet transformation
CN103400378A (en) Method for objectively evaluating quality of three-dimensional image based on visual characteristics of human eyes
CN105654465A (en) Stereo image quality evaluation method through parallax compensation and inter-viewpoint filtering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141105

Termination date: 20210511