[go: up one dir, main page]

CN110853027A - Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation - Google Patents

Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation Download PDF

Info

Publication number
CN110853027A
CN110853027A CN201911124950.4A CN201911124950A CN110853027A CN 110853027 A CN110853027 A CN 110853027A CN 201911124950 A CN201911124950 A CN 201911124950A CN 110853027 A CN110853027 A CN 110853027A
Authority
CN
China
Prior art keywords
local
color
features
image
gaussian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911124950.4A
Other languages
Chinese (zh)
Inventor
方玉明
姚怡茹
鄢杰斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911124950.4A priority Critical patent/CN110853027A/en
Publication of CN110853027A publication Critical patent/CN110853027A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

本发明涉及一种基于局部变动和全局变化的三维合成图像无参考质量评价方法,其特征在于:首先,对于局部变动检测,利用高斯导数提取合成图像的结构和颜色特征;然后,基于结构特征和颜色特征使用局部二值模式进行编码,获得结构和颜色特征图,并对结构和颜色特征图进行计算得到结构特征和颜色特征,从而得到局部结构和颜色的失真信息;接着,对于全局变化检测,提取亮度特征评估三维合成图像的自然性;最后,基于上述提取的视觉特征,训练随机森林回归模型将提取的特征映射到主观质量评分。在三个公开的数据库上的实验结果显示,与现有的无参考图像质量评价方法和全参考三维合成图像质量评价方法相比,该方法表现出了良好的有效性和优越性。

Figure 201911124950

The invention relates to a reference-free quality evaluation method for three-dimensional composite images based on local changes and global changes. The color feature is encoded using the local binary mode to obtain the structure and color feature maps, and the structure and color feature maps are calculated to obtain the structural features and color features, so as to obtain the distortion information of the local structure and color; then, for the global change detection, Luminance features are extracted to evaluate the naturalness of 3D synthetic images; finally, based on the extracted visual features above, a random forest regression model is trained to map the extracted features to subjective quality scores. The experimental results on three public databases show that the proposed method exhibits good effectiveness and superiority compared with the existing no-reference image quality evaluation methods and full-reference 3D synthetic image quality evaluation methods.

Figure 201911124950

Description

一种基于局部变动和全局变化的三维合成图像无参考质量评 价方法A no-reference quality assessment of 3D composite images based on local and global changes Valuation method

技术领域technical field

本发明属于多媒体技术领域,具体属于数字图像和数字图像处理技术领域,具体涉及一种基于局部变动和全局变化的三维合成图像无参考质量评价方法。The invention belongs to the technical field of multimedia, in particular to the technical field of digital images and digital image processing, and particularly relates to a three-dimensional composite image quality evaluation method without reference based on local changes and global changes.

背景技术Background technique

自由视角视频(FVV)和三维影视可以给人们带来身临其境的体验感,这其中的技术在过去几十年间都取得了学术界和工业界的大量关注。通常一个获得自由视角视频的方法是对同一场景用多台相机在不同的视角拍摄的不同的图像,然后将这些不同视角拍摄的图像拼接起来,通过这个方法,使用者们在观看自由视角视频时可以自由改变他们的视角。随着对更好体验感的需求增加,自由视角视频中的视角数量持续增长,存储和传输的负载也同时增加。为了解决这个问题,多视角视频加深(MVD)技术取得了一定的发展。Free-view video (FVV) and 3D film and television can bring people an immersive experience, and these technologies have gained a lot of attention from academia and industry in the past few decades. Usually a method to obtain a free-view video is to use multiple cameras to capture different images of the same scene at different viewpoints, and then stitch these images captured from different viewpoints. Feel free to change their perspective. As the demand for a better experience increases, the number of views in free-view video continues to grow, and the load on storage and transmission also increases. To solve this problem, multi-view video deepening (MVD) technology has been developed.

多视角视频加深技术仅需要原始的视角图和其他相机的深度图,剩下的虚拟视角可以通过基于渲染的加深图片技术(DIBR)产生。然而,DIBR的过程会造成新的视觉失真,比如模糊,不连续,块效应和图像拉伸等等,这些失真很明显是不同于像高斯模糊,高斯噪声和白噪声等等传统失真,这些产生的失真将会严重影响到最终使用者的体验感。因此,设计一种可靠有效的三维合成图像的质量评估方法对三维合成图像的视觉质量进行预测是很有必要的。在过去几年间,一些针对合成失真的质量评估方法已经提出,但这些方法的预测效果不够准确,因此,本发明提出一种基于局部变动和全局变化的三维合成图像质量评价方法,该方法能有效地对合成图像的视觉质量进行预测。The multi-view video deepening technology only needs the original view map and the depth map of other cameras, and the remaining virtual views can be generated by rendering-based deepening of pictures (DIBR). However, the DIBR process can cause new visual distortions such as blurring, discontinuities, blockiness and image stretching, etc. These distortions are clearly different from traditional distortions such as Gaussian blur, Gaussian noise and white noise, which produce The distortion will seriously affect the experience of the end user. Therefore, it is necessary to design a reliable and effective 3D composite image quality assessment method to predict the visual quality of 3D composite images. In the past few years, some quality assessment methods for synthetic distortion have been proposed, but the prediction effect of these methods is not accurate enough. Therefore, the present invention proposes a three-dimensional synthetic image quality assessment method based on local changes and global changes, which can effectively Predict the visual quality of synthetic images.

发明内容SUMMARY OF THE INVENTION

本发明涉及一种基于局部变动和全局变化的三维合成图像无参考质量评价方法,其特征在于:首先,对于局部变动检测,利用高斯导数提取合成图像的结构和颜色特征;然后,基于结构特征和颜色特征使用局部二值模式进行编码,获得结构和颜色特征图,并对结构和颜色特征图进行计算得到结构特征和颜色特征,从而得到局部结构和颜色的失真信息;接着,对于全局变化检测,提取亮度特征评估三维合成图像的自然性;最后,基于上述提取的视觉特征,训练随机森林回归模型将提取的特征映射到主观质量评分。在三个公开的数据库上的实验结果显示,与现有的无参考图像质量评价方法和全参考三维合成图像质量评价方法相比,该方法表现出了良好的有效性和优越性。The invention relates to a reference-free quality evaluation method for three-dimensional composite images based on local changes and global changes. The color feature is encoded using the local binary mode to obtain the structure and color feature maps, and the structure and color feature maps are calculated to obtain the structural features and color features, so as to obtain the distortion information of the local structure and color; then, for the global change detection, Luminance features are extracted to evaluate the naturalness of 3D synthetic images; finally, based on the extracted visual features above, a random forest regression model is trained to map the extracted features to subjective quality scores. The experimental results on three public databases show that the proposed method exhibits good effectiveness and superiority compared with the existing no-reference image quality evaluation methods and full-reference 3D synthetic image quality evaluation methods.

为了实现上述目的,本发明采用的技术方案为:In order to achieve the above object, the technical scheme adopted in the present invention is:

一种基于局部变动和全局变化的三维合成图像无参考质量评价方法,其特征在于:包括以下几个步骤:A method for evaluating the quality of three-dimensional composite images without reference based on local changes and global changes, which is characterized by comprising the following steps:

A.利用高斯导数提取合成图像的结构和颜色特征;A. Use Gaussian derivatives to extract the structure and color features of synthetic images;

B.对得到的结构和颜色特征分别使用局部二值模式进行编码,获得结构和颜色特征图,并基于结构特征图和颜色特征图分别计算得到结构特征和颜色特征,从而得到局部结构和颜色的失真信息;B. Use the local binary mode to encode the obtained structure and color features respectively, obtain the structure and color feature maps, and calculate the structure features and color features based on the structure feature map and color feature map respectively, so as to obtain the local structure and color features. distorted information;

C.对于全局变化,提取亮度特征评估三维合成图像的自然性;C. For global changes, extract brightness features to evaluate the naturalness of 3D composite images;

D.结合所提取的特征信息,使用随机森林回归模型学习视觉特征到主观质量分数的映射关系来预测三维合成图像的质量分数。D. Combined with the extracted feature information, a random forest regression model is used to learn the mapping relationship between visual features and subjective quality scores to predict the quality scores of 3D synthetic images.

进一步地,使用高斯导数提取图像的结构特征和颜色特征。Further, using Gaussian derivatives to extract the structural features and color features of the image.

进一步地,使用高斯导数提取图像的结构特征,其具体步骤是:Further, using the Gaussian derivative to extract the structural features of the image, the specific steps are:

A.局部泰勒级数展开可以代表图像局部特征,通过局部高斯导数,可以获得局部泰勒级数的系数;一张图像的高斯导数可以定义如下:A. The local Taylor series expansion can represent the local features of the image. Through the local Gaussian derivative, the coefficient of the local Taylor series can be obtained; the Gaussian derivative of an image can be defined as follows:

Figure BDA0002276501970000021
Figure BDA0002276501970000021

其中,m≥0和n≥0是沿着水平x和垂直y方向的导数,符号*代表卷积操作;Gσ(x,y,σ)是高斯函数,其标准差为σ的定义如下:Among them, m≥0 and n≥0 are the derivatives along the horizontal x and vertical y directions, and the symbol * represents the convolution operation; G σ (x, y, σ) is a Gaussian function, and its standard deviation σ is defined as follows:

Figure BDA0002276501970000022
Figure BDA0002276501970000022

B.使用二阶高斯导数提取结构特征,首先计算1≤m+n≤2的

Figure BDA0002276501970000023
通过计算高斯导数,所得矩阵
Figure BDA0002276501970000024
可以表示为:B. Use the second-order Gaussian derivative to extract structural features, first calculate 1≤m+n≤2
Figure BDA0002276501970000023
By computing the Gaussian derivative, the resulting matrix
Figure BDA0002276501970000024
It can be expressed as:

Figure BDA0002276501970000031
Figure BDA0002276501970000031

进一步地,使用高斯导数提取图像的颜色特征,其具体步骤是:Further, using the Gaussian derivative to extract the color feature of the image, the specific steps are:

A.在颜色通道的一阶高斯导数上采用不受亮度影响的两个颜色特征,其中一个颜色特征定义如下:A. Two color features that are not affected by brightness are used on the first-order Gaussian derivative of the color channel, one of which is defined as follows:

Figure BDA0002276501970000032
Figure BDA0002276501970000032

其中,

Figure BDA0002276501970000033
R,G,B在颜色空间中分别代表红、绿、蓝通道;另一个颜色特征
Figure BDA0002276501970000034
定义如下:in,
Figure BDA0002276501970000033
R, G, B represent the red, green, and blue channels, respectively, in the color space; another color feature
Figure BDA0002276501970000034
Defined as follows:

Figure BDA0002276501970000035
Figure BDA0002276501970000035

其中,R′、G′和B′分别表示水平方向上R,G和B通道的高斯一阶导数值,并且ρ=2R′-G′-B′,δ=2G′-R′-B′,τ=2B′-R′-G′。where R', G' and B' represent the first-order Gaussian derivative values of the R, G and B channels in the horizontal direction, respectively, and ρ=2R'-G'-B', δ=2G'-R'-B' , τ=2B'-R'-G'.

进一步地,对得到的结构和颜色特征分别使用局部二值模式获得结构和颜色特征图,以此计算质量特征从而得到局部结构和颜色的失真信息。Further, the local binary mode is used for the obtained structure and color features to obtain the structure and color feature maps, and the quality features are calculated to obtain the distortion information of the local structure and color.

进一步地,利用局部二值模式方法获得结构图,得到局部结构的失真信息,其具体步骤是:Further, the structure diagram is obtained by using the local binary mode method, and the distortion information of the local structure is obtained, and the specific steps are:

A.利用局部旋转不变的二值模式方法对

Figure BDA0002276501970000037
中每个像素进行操作,计算得出基于
Figure BDA0002276501970000038
绝对值的特征图谱
Figure BDA0002276501970000039
其计算公式如下:A. Using the local rotation invariant binary mode method to
Figure BDA0002276501970000037
Each pixel in the operation is operated, and the calculation is based on
Figure BDA0002276501970000038
Feature map of absolute value
Figure BDA0002276501970000039
Its calculation formula is as follows:

Figure BDA00022765019700000310
Figure BDA00022765019700000310

其中,s∈{s1,s2,s3,s4,s5};LBP代表LBP操作;riu2代表旋转不变统一模式;D和E代表周围像素的数量和计算半径,周围像素点个数D设置为8,计算半径E为1;由此得到5个特征图谱,分别为

Figure BDA00022765019700000311
Figure BDA00022765019700000312
其中
Figure BDA00022765019700000313
描述了中心像素点和局部区域临近像素点之间的关系;Among them, s∈{s 1 , s 2 , s 3 , s 4 , s 5 }; LBP represents LBP operation; riu2 represents rotation invariant unified mode; D and E represent the number and calculation radius of surrounding pixels, and the surrounding pixels are The number D is set to 8, and the calculated radius E is 1; thus, 5 feature maps are obtained, which are
Figure BDA00022765019700000311
and
Figure BDA00022765019700000312
in
Figure BDA00022765019700000313
Describes the relationship between the central pixel and adjacent pixels in the local area;

B.利用加权直方图表示局部结构失真信息,通过相同的局部二值模式算子对

Figure BDA00022765019700000314
的像素进行累加获得加权直方图,其定义公式如下:B. Use the weighted histogram to represent the local structural distortion information, through the same local binary pattern operator pair
Figure BDA00022765019700000314
The pixels are accumulated to obtain a weighted histogram, and the definition formula is as follows:

Figure BDA0002276501970000041
Figure BDA0002276501970000041

Figure BDA0002276501970000042
Figure BDA0002276501970000042

其中,N代表图片像素的数量;K表示LBP的索引,K∈[0,D+2],是权值其值为特征图谱

Figure BDA0002276501970000044
汇总高斯导数来融合映射高斯导数中根据LBP图强度值的像素值,并且经过归一化操作得到特征向量,增强图片区域中高对比度的变化,以此反映局部结构失真信息。Among them, N represents the number of picture pixels; K represents the index of LBP, K∈[0,D+2], is the weight and its value is the feature map
Figure BDA0002276501970000044
The Gaussian derivatives are aggregated to fuse the pixel values according to the LBP map intensity values in the mapped Gaussian derivatives, and the feature vector is obtained through the normalization operation to enhance the high-contrast changes in the image area to reflect the local structural distortion information.

进一步地,利用编码获得色度图,得到局部色度的失真信息;其中,Further, the chromaticity map is obtained by encoding, and the distortion information of the local chrominance is obtained; wherein,

A.使用局部二值模型得到特征向量:在提取的颜色特征x1上进行LBPriu2操作获得特征图

Figure BDA0002276501970000045
然后将特征图转变成特征向量,其定义公式如下:A. Use the local binary model to obtain the feature vector: perform the LBP riu2 operation on the extracted color feature x 1 to obtain the feature map
Figure BDA0002276501970000045
Then the feature map is transformed into a feature vector, and its definition formula is as follows:

Figure BDA0002276501970000046
Figure BDA0002276501970000046

Figure BDA0002276501970000047
Figure BDA0002276501970000047

其中,

Figure BDA0002276501970000048
是由局部二值模式算子得到的权值,其值为特征图
Figure BDA0002276501970000049
in,
Figure BDA0002276501970000048
is the weight obtained by the local binary pattern operator, and its value is the feature map
Figure BDA0002276501970000049

B.利用加权直方图表示局部色度信息:在提取的颜色特征x2上进行LBPriu2操作获得特征图

Figure BDA00022765019700000410
然后x2的加权直方图计算定义如下:B. Use weighted histogram to represent local chromaticity information: perform LBP riu2 operation on the extracted color feature x 2 to obtain a feature map
Figure BDA00022765019700000410
Then the weighted histogram calculation for x2 is defined as follows:

Figure BDA00022765019700000411
Figure BDA00022765019700000411

Figure BDA00022765019700000412
Figure BDA00022765019700000412

其中,权值

Figure BDA00022765019700000413
为特征图谱
Figure BDA00022765019700000414
最后,表示图像彩色信息的单一特征向量由如下公式计算:Among them, the weight
Figure BDA00022765019700000413
feature map
Figure BDA00022765019700000414
Finally, a single feature vector representing the color information of the image is calculated by the following formula:

Figure BDA00022765019700000415
Figure BDA00022765019700000415

进一步地,对于全局变化,通过提取亮度特征评估三维合成图像的自然性:其中,Further, for global variation, the naturalness of the 3D composite image is evaluated by extracting luminance features: where,

A.使用高斯分布拟合亮度系数,亮度系数(L)定义如下:A. Use a Gaussian distribution to fit the luminance coefficient, the luminance coefficient (L ) is defined as follows:

其中,(i,j)代表了像素的空间位置,并且i∈{1,2,…,a},j∈{1,2,…,b},其中a和b分别代表图像的高度和宽度,μ(i,j)和σ(i,j)定义如下:where (i,j) represents the spatial position of the pixel, and i∈{1,2,…,a},j∈{1,2,…,b}, where a and b represent the height and width of the image, respectively , μ(i,j) and σ(i,j) are defined as follows:

Figure BDA0002276501970000052
Figure BDA0002276501970000052

Figure BDA0002276501970000053
Figure BDA0002276501970000053

其中,ω是一个二维的中心对称高斯权重函数,ω={ωa,b|a∈[-3,3],b∈[-3,3]};Among them, ω is a two-dimensional centrosymmetric Gaussian weight function, ω={ω a,b |a∈[-3,3],b∈[-3,3]};

然后使用一个零均值广义高斯分布拟合亮度参数L′(i,j),其定义公式如下:Then a zero-mean generalized Gaussian distribution is used to fit the luminance parameter L'(i,j), which is defined by the following formula:

Figure BDA0002276501970000054
Figure BDA0002276501970000054

其中,

Figure BDA0002276501970000055
并且
Figure BDA0002276501970000056
其中参数α控制分布的形状,σ控制方差;in,
Figure BDA0002276501970000055
and
Figure BDA0002276501970000056
where the parameter α controls the shape of the distribution and σ controls the variance;

B.随后,在合成图像的5个尺度上计算包括广义高斯分布的形状参数和方差以及亮度系数的峰度和偏度4个参数,产生总共20维特征;此外,通过合成图像与它低通滤波后的图像之间的差异所计算得出一个拉普拉斯金字塔图像,使用广义高斯分布模型来拟合在拉普拉斯金字塔图中的像素值得到形状系数和方差,计算拉普拉斯金字塔图的峰度和偏度,并从五个尺度中提取这四个参数,产生总共20维特征。B. Subsequently, 4 parameters including the shape parameter and variance of the generalized Gaussian distribution and the kurtosis and skewness of the luminance coefficient are calculated on the 5 scales of the synthesized image, resulting in a total of 20-dimensional features; The difference between the filtered images is calculated to obtain a Laplacian pyramid image, a generalized Gaussian distribution model is used to fit the pixel values in the Laplacian pyramid image to obtain the shape coefficient and variance, and the Laplacian is calculated. The kurtosis and skewness of the pyramid plot, and extract these four parameters from five scales, resulting in a total of 20-dimensional features.

进一步地,使用随机森林回归方法训练质量预测模型,其中,Further, a random forest regression method is used to train a quality prediction model, where,

A.特征信息的选取,通过局部特征和全局特征,获得总共一个310维度的特征向量,包含了270维度的局部变动的特征和40维度全局自然性特征;A. Selection of feature information, through local features and global features, a total of 310-dimensional feature vectors are obtained, including 270-dimensional local variable features and 40-dimensional global natural features;

B.使用随机森林回归方法训练视觉质量预测模型,将质量特征映射到主观评价中来。将三维合成视图质量数据库随意划分为训练集和测试集进行1000次训练,最后使用皮尔斯线性相关系数均值(PLCC)、斯皮尓曼等级相关系数(SRCC)、肯德尔等级相关系数(KRCC)和均方根误差(RMSE)作为最终的结果。B. Use the random forest regression method to train the visual quality prediction model to map the quality features to the subjective evaluation. The 3D synthetic view quality database was randomly divided into training and test sets for 1000 training runs, and finally the mean Pierce Linear Correlation Coefficient (PLCC), Spearman Rank Correlation Coefficient (SRCC), Kendall Rank Correlation Coefficient (KRCC) and Root Mean Square Error (RMSE) as the final result.

附图说明Description of drawings

图1为本发明的算法框架图。FIG. 1 is an algorithm framework diagram of the present invention.

具体实施方式Detailed ways

下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动的前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, but not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.

其中,本文所涉及的技术特征、简写/缩写、符号等,以本领域技术人员的公知认识/通常理解为基础进行解释、定义/说明。Wherein, the technical features, abbreviations/abbreviations, symbols, etc. involved in this document are explained, defined/illustrated on the basis of the common knowledge/common understanding of those skilled in the art.

受到提出局部和全局分布的三维合成图像降解和人类视觉系统对结构、颜色信息和全局自然改变十分敏感的启发,本发明设计了一种新颖有效的基于局部变动和全局变化的三维合成视图无参考质量评价方法(LVGC),可以有效提高对三维合成图像质量评价的效果,所用的视觉特征包括结构特征、色度特征、图像自然性、回归预测。Inspired by the proposed local and global distribution of 3D composite image degradation and the fact that the human visual system is very sensitive to structure, color information and global natural changes, the present invention designs a novel and effective 3D composite view based on local and global changes without reference. The quality evaluation method (LVGC) can effectively improve the effect of quality evaluation of three-dimensional composite images. The visual features used include structural features, chromaticity features, image naturalness, and regression prediction.

本发明各个部分的具体操作如下:The concrete operations of each part of the present invention are as follows:

(1)结构特征提取:(1) Structural feature extraction:

研究显示,局部泰勒级数展开可以代表图片局部特征并且局部泰勒级数的系数可以通过局部高斯分布得到,一张图像的高斯分布可以定义为:The research shows that the local Taylor series expansion can represent the local features of the image and the coefficients of the local Taylor series can be obtained by the local Gaussian distribution. The Gaussian distribution of an image can be defined as:

Figure BDA0002276501970000061
Figure BDA0002276501970000061

其中m≥0和n≥0是沿着水平(定义为x)和垂直(定义为y)方向的导数,特别地,符号*代表卷积操作,Gσ(x,y,σ)是高斯函数,并且它的正偏差σ的定义如下:where m≥0 and n≥0 are the derivatives along the horizontal (defined as x) and vertical (defined as y) directions, in particular, the symbol * represents the convolution operation and (x,y,σ) is the Gaussian function , and its positive deviation σ is defined as follows:

Figure BDA0002276501970000062
Figure BDA0002276501970000062

受到其他研究的启发,本发明应用了二阶高斯导数来提取结构特征;首先在1≤m+n≤2下计算得到

Figure BDA0002276501970000063
通过计算高斯导数,矩阵
Figure BDA0002276501970000064
可以表示为:Inspired by other researches, the present invention applies the second-order Gaussian derivative to extract structural features; first, it is calculated under 1≤m+n≤2 to obtain
Figure BDA0002276501970000063
By computing the Gaussian derivative, the matrix
Figure BDA0002276501970000064
It can be expressed as:

Figure BDA00022765019700000713
Figure BDA00022765019700000713

随后,利用局部均匀旋转不变的二值模式算子(ULBP)对

Figure BDA0002276501970000071
中每个像素进行操作实现旋转不变性,可以计算得出基于
Figure BDA0002276501970000072
绝对值的特征图谱
Figure BDA0002276501970000073
其计算公式如下:Then, using the Local Uniform Rotation Invariant Binary Pattern Operator (ULBP) pair
Figure BDA0002276501970000071
Rotation invariance is achieved by operating on each pixel in , which can be calculated based on
Figure BDA0002276501970000072
Feature map of absolute value
Figure BDA0002276501970000073
Its calculation formula is as follows:

Figure BDA0002276501970000074
Figure BDA0002276501970000074

其中,s∈{s1,s2,s3,s4,s5};LBP代表LBP操作;riu2代表旋转不变均匀模式;D和E代表周围像素的数量和邻近半径。特别地,将周围像素点D个数设置为8,临近半径E设置为1,得到5个特征图谱(分别为

Figure BDA0002276501970000075
Figure BDA0002276501970000076
),其中
Figure BDA0002276501970000077
描述了中心像素点和局部区域临近像素点之间的关系,并且这个局部细节可以有效地捕捉到由于不同失真类型而造成的复杂降解。where s∈{s 1 , s 2 , s 3 , s 4 , s 5 }; LBP stands for LBP operation; riu2 stands for rotation-invariant uniform mode; D and E stand for the number of surrounding pixels and the adjacent radius. In particular, set the number of surrounding pixels D to 8 and the adjacent radius E to 1 to obtain 5 feature maps (respectively
Figure BDA0002276501970000075
and
Figure BDA0002276501970000076
),in
Figure BDA0002276501970000077
The relationship between the central pixel and adjacent pixels in the local area is described, and this local detail can effectively capture the complex degradation caused by different distortion types.

尽管局部二值模式可以检测中心像素和它临近像素之间的不同,但是它不能准确地捕获梯度信息,它编码了临近像素的区别,这样会削弱局部二值模式区分局部变异的能力。这点十分关键,局部对比度的改变对评估图片视觉质量时会产生很大的影响。众所周知,对比度变换是与图片视觉质量高度相关的。因此,本发明通过相同的局部二值模式算子来累加的像素从而获得加权直方图,定义如下:Although the local binary mode can detect the difference between the center pixel and its neighboring pixels, it cannot accurately capture the gradient information, it encodes the difference between adjacent pixels, which will weaken the ability of the local binary mode to distinguish the local variation. This is critical, as changes in local contrast can have a large impact on assessing the visual quality of an image. It is well known that contrast transformation is highly related to the visual quality of pictures. Therefore, the present invention accumulates through the same local binary pattern operator to obtain a weighted histogram, which is defined as follows:

Figure BDA0002276501970000079
Figure BDA0002276501970000079

Figure BDA00022765019700000710
Figure BDA00022765019700000710

其中,N代表图片像素的数量;K表示LBP的索引,K∈[0,D+2],并且

Figure BDA00022765019700000711
为权值,其值为特征图谱
Figure BDA00022765019700000712
本发明运用高斯导数汇总融合映射高斯导数中图强度的像素值,并且经过归一化操作得到特征向量;经过这些操作,可以增强图片区域中高对比度的变化。where N represents the number of picture pixels; K represents the index of the LBP, K∈[0,D+2], and
Figure BDA00022765019700000711
is the weight, and its value is the feature map
Figure BDA00022765019700000712
The present invention uses the Gaussian derivative to summarize and fuse the pixel values of the image intensity in the Gaussian derivative, and obtains a feature vector through normalization operations; through these operations, the changes of high contrast in the image area can be enhanced.

(2)颜色特征提取:(2) Color feature extraction:

为了提取颜色特征,本发明在一阶高斯导数的色度通道上采用了两个不受亮度影响的颜色亮度;实验证实,颜色的一阶高斯导数信息可以用来感知局部结构的退化,其中一个颜色特征可以被定义为:In order to extract color features, the present invention adopts two color luminances that are not affected by luminance on the chrominance channel of the first-order Gaussian derivative; experiments have confirmed that the information of the first-order Gaussian derivative of color can be used to perceive the degradation of local structures, one of which is Color features can be defined as:

其中,R、G、B在颜色空间中分别代表红、绿、蓝通道。然后,在x1上进行LBPriu2操作提取特征图谱

Figure BDA0002276501970000083
然后将特征图转变成特征向量,计算公式定义如下:in, R, G, and B represent the red, green, and blue channels, respectively, in the color space. Then, perform LBP riu2 operation on x 1 to extract feature maps
Figure BDA0002276501970000083
Then the feature map is transformed into a feature vector, and the calculation formula is defined as follows:

Figure BDA0002276501970000084
Figure BDA0002276501970000084

其中,

Figure BDA0002276501970000086
是权值,其值为特征图谱
Figure BDA0002276501970000087
另外一个特征为
Figure BDA0002276501970000088
其定义如下:in,
Figure BDA0002276501970000086
is the weight, and its value is the feature map
Figure BDA0002276501970000087
Another feature is
Figure BDA0002276501970000088
It is defined as follows:

Figure BDA0002276501970000089
Figure BDA0002276501970000089

Figure BDA00022765019700000810
Figure BDA00022765019700000810

其中,R′、G′和B′分别表示水平方向上R、G、B通道的高斯一阶导数值,并且ρ=2R′-G′-B′,δ=2G′-R′-B′,τ=2B′-R′-G′。然后,将

Figure BDA00022765019700000811
操作应用于x2计算得到特征图谱
Figure BDA00022765019700000812
权重直方图计算如下:Among them, R', G' and B' represent the first-order Gaussian derivative values of the R, G, and B channels in the horizontal direction, respectively, and ρ=2R'-G'-B', δ=2G'-R'-B' , τ=2B'-R'-G'. followed by
Figure BDA00022765019700000811
The operation is applied to x 2 to calculate the feature map
Figure BDA00022765019700000812
The weight histogram is calculated as follows:

Figure BDA00022765019700000813
Figure BDA00022765019700000813

其中,

Figure BDA00022765019700000815
是被定义为
Figure BDA00022765019700000816
的权重,计算公式如下:in,
Figure BDA00022765019700000815
is defined as
Figure BDA00022765019700000816
The weight of , the calculation formula is as follows:

Figure BDA00022765019700000817
Figure BDA00022765019700000817

颜色特征对于亮度和与亮度有关的场景信息影响比如阴影是不改变的,因此,由于不受光照的影响,它们可以表征强大的结构信息;不仅如此,由单一因素(比如模糊因素)造成的图像失真会损坏图像的结构,但是它们不一定与亮度相联的影响相关。Color features are invariant to luminance and luminance-related scene information effects such as shadows, therefore, because they are not affected by lighting, they can represent strong structural information; not only that, images caused by a single factor (such as blurring) Distortions can damage the structure of an image, but they are not necessarily related to the effects of brightness.

(3)图像自然性表征:(3) Image naturalness representation:

三维合成图像的亮度失真可能会影响自然性,一个高清三维合成图像应该带有自然图片的自然特征。因此,本发明运用基于亮度的质量特征来评定三维合成图像的自然性,考虑到自然图片的亮度参数会遵循一个高斯分布,使用了亮度系数来计算合成图像的自然性;亮度参数(L′)定义如下:The brightness distortion of the 3D composite image may affect the naturalness, and a high-definition 3D composite image should have the natural characteristics of a natural picture. Therefore, the present invention uses the brightness-based quality feature to evaluate the naturalness of the three-dimensional composite image. Considering that the brightness parameter of the natural picture will follow a Gaussian distribution, the brightness coefficient is used to calculate the naturalness of the composite image; the brightness parameter (L') Defined as follows:

Figure BDA0002276501970000091
Figure BDA0002276501970000091

其中,(i,j)代表了空间索引,并且i∈{1,2,…,a},j∈{1,2,…,b},其中a和b分别代表图像的高度和宽度。where (i,j) represents the spatial index, and i∈{1,2,…,a},j∈{1,2,…,b}, where a and b represent the height and width of the image, respectively.

特别地,μ(i,j)和σ(i,j)定义如下:In particular, μ(i,j) and σ(i,j) are defined as follows:

Figure BDA0002276501970000092
Figure BDA0002276501970000092

Figure BDA0002276501970000093
Figure BDA0002276501970000093

其中,ω是经过三个标准偏差采样并重新缩放到单位量的2D中心对称高斯权重函数,ω={ωa,b|a∈[-3,3],b∈[-3,3]}。where ω is a 2D centrosymmetric Gaussian weight function sampled by three standard deviations and rescaled to unity, ω={ω a,b |a∈[-3,3],b∈[-3,3]} .

亮度参数L′(i,j)用一个零均值广义高斯分布拟合,公式定义如下:The luminance parameter L'(i,j) is fitted with a zero-mean generalized Gaussian distribution, and the formula is defined as follows:

Figure BDA0002276501970000094
Figure BDA0002276501970000094

其中,

Figure BDA0002276501970000095
并且
Figure BDA0002276501970000096
参数α控制分布的一般形状,σ控制方差,两个参数(α,σ2)通过这个模型得到评估;并且亮度系数的峰度和偏度由来自5个以上的经典分布计算得出,产生了总共20个特征。in,
Figure BDA0002276501970000095
and
Figure BDA0002276501970000096
The parameter α controls the general shape of the distribution, σ controls the variance, and two parameters (α, σ 2 ) are evaluated by this model; and the kurtosis and skewness of the luminance coefficients are calculated from more than 5 classical distributions, yielding 20 features in total.

然后,通过合成图像与它低通滤波后的版本之间的区别所计算得到一个拉普拉斯金字塔图,使用一个广义高斯分布模型来拟合在拉普拉斯金字塔图中的像素值分布,以拉普拉斯金字塔图的峰度和偏度作为特征,本发明从五个级度中提取质量意识特征,总共产生了20个特征。Then, a Laplacian pyramid is computed by the difference between the composite image and its low-pass filtered version, and a generalized Gaussian distribution model is used to fit the distribution of pixel values in the Laplacian pyramid, Taking the kurtosis and skewness of the Laplacian pyramid graph as features, the present invention extracts quality-conscious features from five levels, resulting in a total of 20 features.

(4)回归模型和质量预测:(4) Regression model and quality prediction:

研究表明在感知视觉信息时,多尺度性质存在于人类视觉系统中,因此提取在多尺度上提取图片的视觉特征可以进行更好表征。通过局部特征和全局特征表示该模型,本发明可以总共获得一个310维度的特征向量,包含了270维度的局部变动的特征和40维度全局自然性特征;然后,使用随机森林方法训练视觉质量预测模型,这样可以将质量特征映射到主观评价中来;通过随意划分数据库:80%的图像样本和对应的主观评价分数在数据库中被用于训练,余下的20%用于测试;最后汇总了皮尔斯线性相关系数均值(PLCC),斯皮尓曼等级相关系数(SRCC),肯德尔等级相关系数(KRCC)和均方根误差(RMSE)作为最终的结果。Studies have shown that multi-scale properties exist in the human visual system when perceiving visual information, so extracting visual features that extract images at multiple scales can be better characterized. By representing the model by local features and global features, the present invention can obtain a total of 310-dimensional feature vectors, including 270-dimensional local variable features and 40-dimensional global natural features; then, the random forest method is used to train the visual quality prediction model , so that quality features can be mapped to subjective evaluations; by randomly partitioning the database: 80% of the image samples and corresponding subjective evaluation scores are used in the database for training, and the remaining 20% are used for testing; finally, the Pierce linear The mean correlation coefficient (PLCC), Spearman rank correlation coefficient (SRCC), Kendall rank correlation coefficient (KRCC) and root mean square error (RMSE) were used as final results.

本发明的过程如图1所示,具体过程如下:The process of the present invention is shown in Figure 1, and the concrete process is as follows:

步骤1:使用高斯导数提取结构和颜色特征;Step 1: Extract structure and color features using Gaussian derivatives;

步骤2:使用局部二值模式来编码结构和颜色特征图谱,用来计算质量意识特征从而得到局部结构化和颜色的失真;Step 2: Use the local binary pattern to encode the structure and color feature maps, which are used to calculate the quality-aware features to obtain local structure and color distortion;

步骤3:过全局改变提取亮度特征用以评估三维合成图像的自然性;Step 3: Extract luminance features through global changes to evaluate the naturalness of the 3D composite image;

步骤4:基于上述提取的特征信息,使用随机森林回归从视觉特征到主观评分映射来训练质量预测模型。Step 4: Based on the above extracted feature information, use random forest regression to map from visual features to subjective scores to train a quality prediction model.

使用皮尔斯线性相关系数均值(PLCC),斯皮尓曼等级相关系数(SRCC),肯德尔相关系数(KRCC)和均方根误差(RMSE)作为最终的比较结果。通常来说,越高的PLCC和SRCC与越低的RMSE值代表了算法有更好的表现,即算法有着更佳的算法预测精准度。为了验证本发明提出的算法的性能,将算法与现有的有参考、无参考质量评价方法在三个公开数据库MCL-3D、IRCCyN/IVC、IETR-DIBR进行比较,比较的评价方法包括PSNR,SSIM,BRISQUE,NIQE,BIQI,NRSL,CM-LOG,MP-PSNRr,MW-PSNR,MW-PSNRr,LOGS,Ref,APT和NIQSV+;前七种方法是针对自然图片的质量评价方法,后七种是特别针对合成视角设计的质量评价方法。The mean Pierce Linear Correlation Coefficient (PLCC), Spearman's Rank Correlation Coefficient (SRCC), Kendall's Correlation Coefficient (KRCC) and Root Mean Square Error (RMSE) were used as final comparison results. Generally speaking, higher PLCC and SRCC and lower RMSE value represent better performance of the algorithm, that is, the algorithm has better prediction accuracy of the algorithm. In order to verify the performance of the algorithm proposed by the present invention, the algorithm is compared with the existing quality evaluation methods with reference and without reference in three public databases MCL-3D, IRCCyN/IVC, and IETR-DIBR. The compared evaluation methods include PSNR, SSIM, BRISQUE, NIQE, BIQI, NRSL, CM-LOG, MP-PSNRr, MW-PSNR, MW-PSNRr, LOGS, Ref, APT and NIQSV+; the first seven methods are quality evaluation methods for natural pictures, the latter seven It is a quality evaluation method specially designed for synthetic viewing angle.

MCL-3D数据库包含了从九个深度图像信息源中挑选的693张立体的图片组。IRCCyN/IVC DIBR数据库由三个MVD序列中挑选而出的12张参考图片和通过7种不同的DIBR技术产生的84张合成图片组成。IETR DIBR数据库由10个MVD序列和7个最新的DIBR技术产生的150合成图片构成,与RCCyN/IVC DIBR数据库相似,IETR DIBR也主要关注渲染失真。The MCL-3D database contains 693 stereoscopic image sets selected from nine depth image sources. The IRCCyN/IVC DIBR database consists of 12 reference pictures selected from three MVD sequences and 84 composite pictures generated by 7 different DIBR techniques. The IETR DIBR database consists of 10 MVD sequences and 7 150 composite images produced by the latest DIBR technology. Similar to the RCCyN/IVC DIBR database, IETR DIBR also mainly focuses on rendering distortion.

表1:本发明与现存全参考方法的比较Table 1: Comparison of the present invention with the existing full reference method

Figure BDA0002276501970000111
Figure BDA0002276501970000111

Figure BDA0002276501970000121
Figure BDA0002276501970000121

表1为提出的方法与现存全参考方法的比较,从这些比较中,本发明提出的无参考方法表现效果更好。Table 1 shows the comparison between the proposed method and the existing full reference method. From these comparisons, the no-reference method proposed by the present invention performs better.

表2:本发明与现存无参考方法的比较Table 2: Comparison of the present invention with existing methods without reference

Figure BDA0002276501970000122
Figure BDA0002276501970000122

Figure BDA0002276501970000131
Figure BDA0002276501970000131

表2为提出的方法与现存无参考方法的比价,从这些比较中,本发明提出的无参考方法在所测试的数据库上表现更好。Table 2 shows the comparison between the proposed method and the existing no-reference method. From these comparisons, the no-reference method proposed by the present invention performs better on the tested database.

上述实施方式是对本发明的说明,不是对本发明的限定,可以理解在不脱离本发明的原理和精神的情况下可以对这些实施例进行多种变化、修改、替换和变型,本发明的保护范围由所附权利要求及其等同物限定。The above-mentioned embodiment is an illustration of the present invention, not a limitation of the present invention. It can be understood that various changes, modifications, replacements and modifications can be made to these embodiments without departing from the principle and spirit of the present invention. The protection scope of the present invention It is defined by the appended claims and their equivalents.

Claims (9)

1. A three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation is characterized in that: the method comprises the following steps:
A. extracting the structure and color characteristics of the synthesized image by using a Gaussian derivative;
B. coding the obtained structure and color characteristics by using a local binary pattern respectively to obtain a structure characteristic diagram and a color characteristic diagram, and calculating the structure characteristic diagram and the color characteristic diagram respectively based on the structure characteristic diagram and the color characteristic diagram to obtain distortion information of the local structure and the color;
C. extracting brightness characteristics to evaluate the naturalness of the three-dimensional synthetic image for global change;
D. and (4) combining the extracted characteristic information, and using a random forest regression model to learn the mapping relation between the visual characteristics and the subjective quality scores to predict the quality scores of the three-dimensional synthetic images.
2. The method of claim 1, wherein: structural features and color features of the image are extracted using gaussian derivatives.
3. The method of claim 2, wherein: the structural features of the image are extracted by using Gaussian derivatives, and the method comprises the following specific steps:
A. the local Taylor series expansion can represent the local characteristics of the image, and the coefficients of the local Taylor series can be obtained through local Gaussian derivatives; the gaussian derivative of an image can be defined as follows:
Figure FDA0002276501960000011
where m ≧ 0 and n ≧ 0 are derivatives along the horizontal x and vertical y directions, the symbol denotes the convolution operation; gσ(x, y, σ) is a Gaussian function whose standard deviation σ is defined as follows:
B. using the second order Gaussian derivative to extract structural features, first calculating the sum of m + n < 1 > and 2
Figure FDA0002276501960000014
By calculating the Gaussian derivatives, the resulting matrix
Figure FDA0002276501960000015
Can be expressed as:
Figure FDA0002276501960000013
4. the method of claim 2, wherein: extracting color features of the image by using Gaussian derivatives, which comprises the following specific steps:
A. two color features that are not affected by luminance are employed on the first order gaussian derivative of the color channel, one of which is defined as follows:
Figure FDA0002276501960000021
wherein,
Figure FDA0002276501960000022
r, G and B respectively represent red, green and blue channels in a color space; another color characteristic
Figure FDA0002276501960000025
The definition is as follows:
Figure FDA0002276501960000024
wherein R ', G', and B 'respectively represent gaussian first derivative values of R, G, and B channels in the horizontal direction, and ρ ═ 2R' -G '-B', δ ═ 2G '-R' -B ', τ ═ 2B' -R '-G'.
5. The method of claim 2, wherein: and obtaining a structure characteristic graph and a color characteristic graph by respectively using the obtained structure and color characteristics through a local binary pattern, and calculating quality characteristics so as to obtain distortion information of the local structure and color.
6. The method of claim 5, wherein: obtaining a structure chart by using a local binary pattern method to obtain distortion information of a local structure, wherein the method comprises the following specific steps of:
A. binary pattern method pair using local rotation invariance
Figure FDA0002276501960000026
Each pixel of the image is operated on, the calculation is based onCharacteristic map of absolute value
Figure FDA0002276501960000028
The calculation formula is as follows:
Figure FDA0002276501960000029
wherein s ∈ { s ∈ [)1,s2,s3,s4,s5}; LBP stands for LBP operation; riu2 represents a rotation invariant unified mode; d and E represent the number and the calculation radius of surrounding pixels, the number D of the surrounding pixels is set to be 8, and the calculation radius E is 1; thereby obtaining 5 characteristic maps which are respectively
Figure FDA00022765019600000210
And
Figure FDA00022765019600000211
wherein
Figure FDA00022765019600000212
Describing the relationship between the central pixel point and the adjacent pixel points in the local area;
B. representing local structure distortion information by using weighted histogram, and using same local binary pattern operator pair
Figure FDA00022765019600000213
The pixels of (a) are accumulated to obtain a weighted histogram, which is defined by the following formula:
Figure FDA0002276501960000031
Figure FDA0002276501960000032
wherein N represents the number of picture pixels; k denotes the index of LBP, K ∈ [0, D +2 ]],
Figure FDA0002276501960000038
Is a weight value which is a characteristic map
Figure FDA0002276501960000039
And summarizing the Gaussian derivatives to fuse and map pixel values in the Gaussian derivatives according to the LBP image intensity value, and obtaining a characteristic vector through normalization operation to enhance the change of high contrast in the image area so as to reflect local structure distortion information.
7. The method of claim 5, wherein: obtaining a chromaticity diagram by using coding to obtain distortion information of local chromaticity; wherein,
A. obtaining a feature vector by using a local binary model: at the extracted color feature x1Carry out LBPriu2Operation acquisition feature mapThe feature map is then converted into feature vectors, which define the formula:
Figure FDA0002276501960000033
Figure FDA0002276501960000034
wherein,
Figure FDA00022765019600000311
is a weight value obtained by a local binary pattern operator, and the value is a characteristic diagram
Figure FDA00022765019600000312
B. The local chrominance information is represented by a weighted histogram: at the extracted color feature x2Carry out LBPriu2Operation acquisition feature map
Figure FDA00022765019600000313
Then x2The weighted histogram calculation of (a) is defined as follows:
Figure FDA0002276501960000035
Figure FDA0002276501960000036
wherein, the weight value
Figure FDA00022765019600000314
Is a characteristic map
Figure FDA00022765019600000315
Finally, a single feature vector representing the color information of the image is calculated by the following formula:
Figure FDA0002276501960000037
8. the method of claim 1, wherein: for global variation, the naturalness of the three-dimensional synthetic image is evaluated by extracting the brightness features: wherein,
A. the luminance coefficient is fitted using a gaussian distribution, and the luminance coefficient (L') is defined as follows:
Figure FDA0002276501960000041
where (i, j) represents the spatial position of the pixel, and i e {1,2, …, a }, j e {1,2, …, b }, where a and b represent the height and width of the image, respectively, and μ (i, j) and σ (i, j) are defined as follows:
Figure FDA0002276501960000042
Figure FDA0002276501960000043
where ω is a two-dimensional, centrosymmetric gaussian weighting function, ω ═ ωa,b|a∈[-3,3],b∈[-3,3]};
The luminance parameter L' (i, j) is then fitted using a zero-mean generalized gaussian distribution, which defines the formula:
Figure FDA0002276501960000044
wherein,
Figure FDA0002276501960000045
and is
Figure FDA0002276501960000046
Where parameter α controls the shape of the distribution, σ controls the variance;
B. subsequently, 4 parameters including shape parameters and variance of the generalized gaussian distribution and kurtosis and skewness of the luminance coefficient are calculated on 5 scales of the composite image, resulting in a total of 20-dimensional features; in addition, a laplacian pyramid image is computed from the difference between the composite image and its low-pass filtered image, the shape coefficients and variances are obtained using a generalized gaussian distribution model to fit the pixel values in the laplacian pyramid, the kurtosis and skewness of the laplacian pyramid are computed, and the four parameters are extracted from the five scales, yielding a total of 20-dimensional features.
9. The method of claim 1, wherein: a quality prediction model is trained using a random forest regression method, wherein,
A. selecting feature information, namely obtaining a total 310-dimensional feature vector through local features and global features, wherein the total 310-dimensional feature vector comprises 270-dimensional local variation features and 40-dimensional global naturalness features;
B. and training a visual quality prediction model by using a random forest regression method, and mapping the quality characteristics to subjective evaluation. The three-dimensional synthetic view quality database is randomly divided into a training set and a testing set to be trained for 1000 times, and finally, a pierce linear correlation coefficient mean (PLCC), a pierce Mandarin level correlation coefficient (SRCC), a Kendall level correlation coefficient (KRCC) and a Root Mean Square Error (RMSE) are used as final results.
CN201911124950.4A 2019-11-18 2019-11-18 Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation Pending CN110853027A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911124950.4A CN110853027A (en) 2019-11-18 2019-11-18 Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911124950.4A CN110853027A (en) 2019-11-18 2019-11-18 Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation

Publications (1)

Publication Number Publication Date
CN110853027A true CN110853027A (en) 2020-02-28

Family

ID=69600595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911124950.4A Pending CN110853027A (en) 2019-11-18 2019-11-18 Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation

Country Status (1)

Country Link
CN (1) CN110853027A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288699A (en) * 2020-10-23 2021-01-29 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN112785494A (en) * 2021-01-26 2021-05-11 网易(杭州)网络有限公司 Three-dimensional model construction method and device, electronic equipment and storage medium
CN113643262A (en) * 2021-08-18 2021-11-12 上海大学 No-reference panoramic image quality evaluation method, system, equipment and medium
CN115511833A (en) * 2022-09-28 2022-12-23 广东百能家居有限公司 Glass surface scratch detection system
CN116309216A (en) * 2023-02-27 2023-06-23 南京博视医疗科技有限公司 Pseudo-color image fusion method and image fusion system based on multiple wave bands
CN116758060A (en) * 2023-08-10 2023-09-15 江苏森标科技有限公司 Vertical basket of flowers visual detection system of battery piece

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408561A (en) * 2016-09-10 2017-02-15 天津大学 Texture feature-based image quality evaluating method without reference
CN108010024A (en) * 2017-12-11 2018-05-08 宁波大学 It is a kind of blind with reference to tone mapping graph image quality evaluation method
CN109919959A (en) * 2019-01-24 2019-06-21 天津大学 Tone-mapping image quality assessment method based on color, naturalness and structure
CN110046673A (en) * 2019-04-25 2019-07-23 上海大学 No reference tone mapping graph image quality evaluation method based on multi-feature fusion
CN110363704A (en) * 2019-05-29 2019-10-22 西北大学 Image Super-resolution Reconstruction Model Construction and Reconstruction Method Fusion Morphology and Color

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408561A (en) * 2016-09-10 2017-02-15 天津大学 Texture feature-based image quality evaluating method without reference
CN108010024A (en) * 2017-12-11 2018-05-08 宁波大学 It is a kind of blind with reference to tone mapping graph image quality evaluation method
CN109919959A (en) * 2019-01-24 2019-06-21 天津大学 Tone-mapping image quality assessment method based on color, naturalness and structure
CN110046673A (en) * 2019-04-25 2019-07-23 上海大学 No reference tone mapping graph image quality evaluation method based on multi-feature fusion
CN110363704A (en) * 2019-05-29 2019-10-22 西北大学 Image Super-resolution Reconstruction Model Construction and Reconstruction Method Fusion Morphology and Color

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288699A (en) * 2020-10-23 2021-01-29 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN112288699B (en) * 2020-10-23 2024-02-09 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN112785494A (en) * 2021-01-26 2021-05-11 网易(杭州)网络有限公司 Three-dimensional model construction method and device, electronic equipment and storage medium
CN112785494B (en) * 2021-01-26 2023-06-16 网易(杭州)网络有限公司 Three-dimensional model construction method and device, electronic equipment and storage medium
CN113643262A (en) * 2021-08-18 2021-11-12 上海大学 No-reference panoramic image quality evaluation method, system, equipment and medium
CN115511833A (en) * 2022-09-28 2022-12-23 广东百能家居有限公司 Glass surface scratch detection system
CN116309216A (en) * 2023-02-27 2023-06-23 南京博视医疗科技有限公司 Pseudo-color image fusion method and image fusion system based on multiple wave bands
CN116309216B (en) * 2023-02-27 2024-01-09 南京博视医疗科技有限公司 Pseudo-color image fusion method and image fusion system based on multiple wave bands
CN116758060A (en) * 2023-08-10 2023-09-15 江苏森标科技有限公司 Vertical basket of flowers visual detection system of battery piece
CN116758060B (en) * 2023-08-10 2023-10-27 江苏森标科技有限公司 Vertical basket of flowers visual detection system of battery piece

Similar Documents

Publication Publication Date Title
CN110853027A (en) Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation
Yue et al. Combining local and global measures for DIBR-synthesized image quality evaluation
CN108830796B (en) Hyperspectral image super-resolution reconstruction method based on spectral-space combination and gradient domain loss
CN109671023A (en) A kind of secondary method for reconstructing of face image super-resolution
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN107635136B (en) View-based access control model perception and binocular competition are without reference stereo image quality evaluation method
AU2016302049B2 (en) 2D-to-3D video frame conversion
CN105959684A (en) Stereo image quality evaluation method based on binocular fusion
Wang et al. Quaternion representation based visual saliency for stereoscopic image quality assessment
CN114648482B (en) Quality evaluation method and system for stereoscopic panoramic images
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN114998596B (en) Quality evaluation method of high dynamic range stereoscopic omnidirectional image based on visual perception
CN103426173A (en) Objective evaluation method for stereo image quality
CN108470178B (en) A depth map saliency detection method combined with depth reliability evaluation factor
CN109788275A (en) Naturality, structure and binocular asymmetry are without reference stereo image quality evaluation method
CN112950596A (en) Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer
CN114881905A (en) Processing method for fusing infrared image and visible light image based on wavelet transformation
Kuo et al. Depth estimation from a monocular view of the outdoors
CN105869115A (en) Depth image super-resolution method based on kinect2.0
CN111641822B (en) A Repositioning Stereo Image Quality Evaluation Method
CN110691236B (en) Panoramic video quality evaluation method
CN112508847A (en) Image quality evaluation method based on depth feature and structure weighted LBP feature
CN105828061B (en) A kind of virtual view quality evaluating method of view-based access control model masking effect
CN109801257A (en) No reference DIBR generates image quality evaluating method
CN107483918A (en) Saliency-based full-reference stereo image quality assessment method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200228