CN109978834A - A kind of screen picture quality evaluating method based on color and textural characteristics - Google Patents
A kind of screen picture quality evaluating method based on color and textural characteristics
- Publication number
- CN109978834A CN201910162786.XA
- Authority
- CN
- China
- Prior art keywords
- color
- features
- image
- texture
- statistical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a screen image quality evaluation method based on color and texture features. First, a parametric model is designed to describe the color information of a screen image, and a statistical model is used to extract quality-related features that assess the spatial color continuity of the image. Second, local texture features are used to describe the spatial texture continuity of the image. Finally, support vector regression is used as the mapping function to map the visual features to subjective scores. Experimental results show that the proposed method predicts the visual quality of screen images well compared with existing no-reference screen image algorithms, and even outperforms some full-reference screen image algorithms.
Description
Technical Field
The invention provides a screen image quality evaluation method based on color and texture features. It belongs to the field of multimedia technology, and in particular to the field of digital image and digital video processing.
Background
With the rapid development of wireless networks, digital images are no longer limited to natural scenes captured with digital single-lens reflex cameras. Digital images now cover a wide variety of content, such as natural scenes, computer-generated graphics, hand-drawn paintings, special signs, and software-synthesized images. Such images are collectively referred to as screen images. A screen image may combine natural scenes, graphics, tables, text, and computer-generated symbols, and therefore carries information that differs from that of a natural image.
In recent years, the mean squared error and the peak signal-to-noise ratio have been widely used to evaluate the visual quality of distorted natural images. However, the quantitative measurements of these traditional methods are not consistent with human ratings, because they only consider pixel-wise differences between the reference and distorted images and ignore the characteristics of the human visual system. To address this, many studies take human visual characteristics into account and design more advanced visual quality evaluation methods. These existing methods were designed for natural images: they usually perform well on natural images, but their performance on screen image databases is much lower, so they cannot effectively evaluate the quality of distorted screen images.
Because of differences in image content, evaluating a screen image with a quality evaluation method designed for distorted natural images may ignore important information, leading to inaccurate estimates of perceived screen image quality. A good screen image quality evaluation algorithm should take the rich information of screen images into account, including graphics, edges, and textures, and produce a correct quality score. Effectively evaluating the visual quality of screen images is therefore an important problem.
In general, subjective quality assessment is the most effective and reliable approach, since it reflects the human eye's direct judgment of screen image quality. However, it is time-consuming and labor-intensive and therefore cannot be used in practical applications. For this reason, it is worthwhile to design an effective and robust no-reference quality evaluation method for screen images based on their inherent characteristics.
The purposes of proposing an algorithm that automatically evaluates screen images are:
(1) A no-reference screen image quality evaluation algorithm can be embedded in any processing stage that involves screen images, to monitor the quality of that stage and provide timely feedback for subsequent processing.
(2) An efficient screen image quality evaluation method can guide the generation and display of screen images, so that they better match the perception of the human visual system.
Therefore, an accurate and effective screen image quality evaluation algorithm will greatly promote the development of screen image applications.
Summary of the Invention
To improve the performance of objective quality evaluation for screen images and to address their specific distortion types and content, a new screen image quality evaluation model based on spatial continuity is proposed. The visual features used consist of color statistical features based on spatial color continuity and texture statistical features based on spatial texture continuity.
The process of the invention is shown in Figure 1 and proceeds as follows:
Step 1: Extract color statistical features of the screen image with color descriptors based on zero-order and first-order invariance, to assess the spatial color continuity of the screen image.
Step 2: Extract texture statistical features of the screen image with the local ternary pattern operation, to describe the spatial texture continuity of the screen image.
Step 3: Use a machine learning method, support vector regression, to map the extracted features to an image quality score; the mapped score serves as the measure of screen image quality.
Three commonly used criteria are used to evaluate how accurately the algorithm predicts screen image quality: the Pearson Linear Correlation Coefficient (PLCC), which measures prediction accuracy; the Spearman Rank-order Correlation Coefficient (SRCC), which measures prediction monotonicity; and the Root Mean Squared Error (RMSE), which measures the agreement between objective and subjective scores. In general, higher PLCC and SRCC values and lower RMSE values indicate better predictive performance. To verify the performance of the proposed algorithm, it is compared with existing full-reference and no-reference image quality evaluation methods on the SIQAD and SCID databases. To eliminate evaluation bias, each algorithm is evaluated 1000 times with the above criteria, and the median of each criterion is taken as the final result. Experimental results show that the proposed no-reference quality evaluation model for screen images clearly outperforms current models.
A quality evaluation method for screen images, whose feature extraction comprises the following steps:
A. Extract color statistical features of the screen image with color invariance descriptors, to assess the spatial color continuity of the screen image.
B. Extract local texture statistical features with the local ternary pattern operator and logarithmic derivatives, to describe the spatial texture continuity of the screen image.
C. Use support vector regression as the mapping function to map the perceptual image features to subjective scores.
2. The quality evaluation method for screen images according to claim 1, characterized in that the visual features include color statistical features and texture statistical features.
Further, the visual features include color statistical features, which are obtained with zero-order and first-order color invariance descriptors.
Further, the color statistical features are extracted with two zero-order color invariance descriptors and two first-order color invariance descriptors, as follows:
A. Extract color statistical features with the zero-order color invariance descriptors, saturation φ and chromaticity ψ. Saturation is defined in terms of R, G, and B, the red, green, and blue channels of the image.
Taking spatial correlation into account, the map φ(i,j)′ with redundant information removed is computed, and the saturation differences between adjacent pixels in the four directions are calculated as follows:
φ_h(i,j)′ = φ(i,j+1) - φ(i,j)    (2)
φ_v(i,j)′ = φ(i+1,j) - φ(i,j)    (3)
φ_d1(i,j)′ = φ(i+1,j+1) - φ(i,j)    (4)
φ_d2(i,j)′ = φ(i+1,j-1) - φ(i,j)    (5)
where h denotes the horizontal direction, v the vertical direction, d1 the main diagonal direction, and d2 the secondary diagonal direction, and φ(i,j) denotes the saturation of the pixel in row i and column j.
Since the intensity values of a screen image do not follow the Gaussian or Gaussian-like distribution obeyed by natural images, the color features c1 ∈ {φ′_h, φ′_v, φ′_d1, φ′_d2} are extracted by computing binned histograms of the absolute values of φ_d(i,j)′, d ∈ {h, v, d1, d2}. The statistical histogram of relative saturation is computed over bins indexed by m ∈ [1,10], for each direction d ∈ {h, v, d1, d2}, where H and W are the height and width of the image and i ∈ {1,2,…,H}, j ∈ {1,2,…,W}.
The photometric and reflection invariance of the image is measured with the chromaticity descriptor. The image is first converted from the RGB color space to the OC space, in which O1 and O2 are color components that do not change under photometric and geometric transformations and O3 is the luminance component; the chromaticity ψ is then defined from these opponent components.
Similarly, the chromaticity differences between adjacent pixels are used to extract chromaticity-based features; the chromaticity differences in the four directions are computed as follows:
ψ_h(i,j)′ = Γ(ψ(i,j+1), ψ(i,j))    (9)
ψ_v(i,j)′ = Γ(ψ(i+1,j), ψ(i,j))    (10)
ψ_d1(i,j)′ = Γ(ψ(i+1,j+1), ψ(i,j))    (11)
ψ_d2(i,j)′ = Γ(ψ(i+1,j-1), ψ(i,j))    (12)
where ψ(i,j) denotes the chromaticity value at position (i,j) of the screen image, the chromaticity difference ψ_d(i,j)′ denotes the color angle difference between adjacent pixels, d ∈ {h, v, d1, d2}, and Γ denotes the color angle difference operation.
As with the saturation descriptor, histograms c2 ∈ {ψ′_h, ψ′_v, ψ′_d1, ψ′_d2} are computed for the chromaticity difference maps, and the corresponding color features are further extracted from them.
B. Extract color statistical features with the first-order color invariance descriptors, the opponent angle θ and the spherical angle δ.
The opponent angle is defined from the Gaussian partial derivatives of O1 and O2, where the Gaussian function has scale parameter σ, and these derivatives are in turn expressed through Rx, Gx, and Bx, the first-order Gaussian partial derivatives of the R, G, and B channels of the color image. Similarly to formulas (9)–(12), the opponent angle differences are computed in the four directions, i.e., θ_h(i,j)′, θ_v(i,j)′, θ_d1(i,j)′, and θ_d2(i,j)′, the corresponding histograms c3 ∈ {θ′_h, θ′_v, θ′_d1, θ′_d2} are computed, and color features are further extracted from them.
The last color invariance descriptor is the spherical angle δ, whose definition involves a dot-product operation and an auxiliary term with γ = R² + G². Similarly, the spherical angle differences are computed in the four directions, i.e., δ_h(i,j)′, δ_v(i,j)′, δ_d1(i,j)′, and δ_d2(i,j)′, the corresponding histograms c4 ∈ {δ′_h, δ′_v, δ′_d1, δ′_d2} are computed, and the corresponding color statistical features are extracted from them.
Finally, a total of 320 color features are extracted from two scales of the distorted screen image with the above four color invariance descriptors. These features are concatenated to form the color feature vector of perceptual quality, which serves as part of the input feature vector of the SVR model.
Further, the texture statistical features are extracted with the local ternary pattern operator, as follows:
Because the local ternary pattern operation extracts important information that is sensitive to image distortion, the distorted screen image is first split into two channels according to the local ternary pattern operation: the upper pattern (u) and the lower pattern (l). For these two channels, the corresponding histograms are computed according to formulas (6) and (7), and the corresponding texture statistical features, with m ∈ [1,10], are extracted from them. According to existing research, the logarithmic derivative of an image represents its texture information well; this derivative can be defined as the difference of intensity values between adjacent pixels in the four directions. Therefore, corresponding texture features can also be extracted from the logarithmic derivatives of the upper and lower patterns of the screen image. The updated patterns u(i,j)′ and l(i,j)′ are obtained by applying a logarithm to the upper and lower patterns of the image, as follows:
u(i,j)′ = log[u(i,j) + C]    (18)
l(i,j)′ = log[l(i,j) + C]    (19)
where C is a small constant. The spatial correlation between adjacent pixels of the upper pattern is then computed.
Similarly, the spatial correlations between adjacent pixels of the lower pattern are computed; the corresponding histograms are then computed from the resulting correlation maps, and the corresponding texture statistical features are extracted from them.
Finally, a total of 200 texture features are extracted from two scales of the distorted screen image with the above upper and lower patterns. These features are concatenated to form the texture feature vector of perceptual quality, which serves as part of the input feature vector of the SVR model.
Brief Description of the Drawings
Figure 1 is a flow diagram of the algorithm of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
The technical features, abbreviations, symbols, and similar notation used in this document are explained and defined on the basis of the common knowledge and usual understanding of those skilled in the art.
To improve the performance of objective quality evaluation for screen images and to address their specific distortion types and content, a new screen image quality evaluation model based on spatial continuity is proposed. The visual features used consist of color statistical features based on spatial color continuity and texture statistical features based on spatial texture continuity.
The process of the invention is shown in Figure 1 and proceeds as follows:
Step 1: Extract color statistical features of the screen image with color descriptors based on zero-order and first-order invariance, to assess the spatial color continuity of the screen image.
Step 2: Extract texture statistical features of the screen image with the local ternary pattern operation, to describe the spatial texture continuity of the screen image.
Step 3: Use a machine learning method, support vector regression, to map the extracted features to an image quality score; the mapped score serves as the measure of screen image quality.
Three commonly used criteria are used to evaluate how accurately the algorithm predicts screen image quality: the Pearson Linear Correlation Coefficient (PLCC), which measures prediction accuracy; the Spearman Rank-order Correlation Coefficient (SRCC), which measures prediction monotonicity; and the Root Mean Squared Error (RMSE), which measures the agreement between objective and subjective scores. In general, higher PLCC and SRCC values and lower RMSE values indicate better predictive performance. To verify the performance of the proposed algorithm, it is compared with existing full-reference and no-reference image quality evaluation methods on the SIQAD and SCID databases. To eliminate evaluation bias, each algorithm is evaluated 1000 times with the above criteria, and the median of each criterion is taken as the final result. Experimental results show that the proposed no-reference quality evaluation model for screen images clearly outperforms current models.
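For illustration, the three criteria can be computed with a short Python sketch such as the following; the array names pred and mos are illustrative placeholders (not notation from the text) for the predicted and subjective scores.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_predictions(pred, mos):
    """Return (PLCC, SRCC, RMSE) between predicted quality scores and subjective scores."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    plcc, _ = pearsonr(pred, mos)                        # prediction accuracy
    srcc, _ = spearmanr(pred, mos)                       # prediction monotonicity
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))    # objective/subjective agreement
    return plcc, srcc, rmse
```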
The specific operations of each part of the invention are as follows:
· Extraction of color statistical features:
The color statistical features are extracted with zero-order and first-order invariance color descriptors based on spatial color continuity.
The zero-order invariance color descriptors represent the zero-order derivative information of the image color channels. First, the corresponding color statistical features are extracted with the color saturation invariance descriptor. Saturation is the degree to which a pure color is diluted by white light and is an important component of an image's color representation; in general, higher saturation indicates better visual quality. For a given screen image I = {R, G, B}, the color saturation φ is computed from the three color channels.
Changes in the saturation values of adjacent pixels can produce visual distortion. Therefore, to remove spatial redundancy and capture spatial correlation, the relative color saturation between adjacent pixels is computed along the horizontal (h), vertical (v), main diagonal (d1), and secondary diagonal (d2) directions, as follows:
φ_h(i,j)′ = φ(i,j+1) - φ(i,j)    (2)
φ_v(i,j)′ = φ(i+1,j) - φ(i,j)    (3)
φ_d1(i,j)′ = φ(i+1,j+1) - φ(i,j)    (4)
φ_d2(i,j)′ = φ(i+1,j-1) - φ(i,j)    (5)
On this basis, the absolute values of φ_d(i,j)′, d ∈ {h, v, d1, d2}, are taken, and the color features are represented as histograms with 10 bins; the extracted color statistical features are c1 ∈ {φ′_h, φ′_v, φ′_d1, φ′_d2}. The corresponding histogram is computed over bins indexed by m ∈ [1,10], for each direction d ∈ {h, v, d1, d2}, where H and W are the height and width of the image and i ∈ {1,2,…,H}, j ∈ {1,2,…,W}.
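A minimal Python sketch of this step is given below. It assumes the saturation map φ has already been computed from the R, G, and B channels (the saturation formula itself is not reproduced here), and the bin edges and normalization of the histogram are assumptions rather than values taken from the text; the two helpers are reused by the later sketches.

```python
import numpy as np

def directional_differences(phi):
    """Differences of a per-pixel map along h, v, d1 (main diagonal), d2 (secondary diagonal)."""
    return {
        "h":  phi[:-1, 1:] - phi[:-1, :-1],   # phi(i, j+1) - phi(i, j)
        "v":  phi[1:, :-1] - phi[:-1, :-1],   # phi(i+1, j) - phi(i, j)
        "d1": phi[1:, 1:]  - phi[:-1, :-1],   # phi(i+1, j+1) - phi(i, j)
        "d2": phi[1:, :-1] - phi[:-1, 1:],    # phi(i+1, j-1) - phi(i, j)
    }

def histogram_features(diff_map, bins=10):
    """Normalized 10-bin histogram of the absolute values of a difference map."""
    mag = np.abs(diff_map).ravel()
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, float(mag.max()) + 1e-12))
    return hist / max(hist.sum(), 1)
```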
Second, the corresponding color statistical features are extracted with the color chromaticity invariance descriptor. Chromaticity is an important factor in the color perception of an image. The image is converted from the RGB color space to the OC space, in which O1 and O2 are color components that do not change under photometric and geometric transformations and O3 is the luminance component; the chromaticity ψ is then computed from these opponent components.
Similarly, the chromaticity differences between adjacent pixels are used to extract chromaticity-based features. The chromaticity differences along the horizontal (h), vertical (v), main diagonal (d1), and secondary diagonal (d2) directions are computed as follows:
ψ_h(i,j)′ = Γ(ψ(i,j+1), ψ(i,j))    (9)
ψ_v(i,j)′ = Γ(ψ(i+1,j), ψ(i,j))    (10)
ψ_d1(i,j)′ = Γ(ψ(i+1,j+1), ψ(i,j))    (11)
ψ_d2(i,j)′ = Γ(ψ(i+1,j-1), ψ(i,j))    (12)
where ψ(i,j) denotes the chromaticity value at position (i,j) of the screen image, ψ_d(i,j)′ denotes the color chromaticity difference between adjacent pixels, d ∈ {h, v, d1, d2}, and Γ denotes the color chromaticity difference operation.
As with the saturation descriptor, histograms c2 ∈ {ψ′_h, ψ′_v, ψ′_d1, ψ′_d2} are computed for the chromaticity difference maps, and the corresponding color features are further extracted from them.
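The following sketch illustrates this stage under explicit assumptions: the opponent transform is taken to be the common choice O1 = (R - G)/√2, O2 = (R + G - 2B)/√6, O3 = (R + G + B)/√3, the chromaticity angle is taken as arctan(O1/O2), and Γ is taken as the smallest signed angular difference; the text's own formulas for these quantities are not reproduced here. The four directional differences of formulas (9)–(12) can then be formed with the same shifted index pairs used for the saturation maps above.

```python
import numpy as np

def rgb_to_oc(rgb):
    """Opponent-color transform (assumed coefficients)."""
    R, G, B = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    O1 = (R - G) / np.sqrt(2.0)
    O2 = (R + G - 2.0 * B) / np.sqrt(6.0)
    O3 = (R + G + B) / np.sqrt(3.0)
    return O1, O2, O3

def chromaticity_angle(O1, O2):
    """Chromaticity (hue-like) angle psi, assumed to be arctan(O1 / O2)."""
    return np.arctan2(O1, O2)

def angle_difference(a, b):
    """One possible Gamma operator: smallest signed angular difference between two angle maps."""
    d = a - b
    return np.arctan2(np.sin(d), np.cos(d))
```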
The first-order invariance color descriptors represent the first-order derivative information of the image color channels. The opponent angle descriptor is used to extract the corresponding color statistical features.
The opponent angle θ is defined from the Gaussian partial derivatives of O1 and O2, where the Gaussian function has scale parameter σ, and these derivatives are in turn expressed through Rx, Gx, and Bx, the first-order Gaussian partial derivatives of the R, G, and B channels of the color image. Similarly to formulas (9)–(12), the opponent angle differences are computed in the four directions, i.e., θ_h(i,j)′, θ_v(i,j)′, θ_d1(i,j)′, and θ_d2(i,j)′, the corresponding histograms c3 ∈ {θ′_h, θ′_v, θ′_d1, θ′_d2} are computed, and color statistical features are further extracted from them.
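A sketch of the opponent-angle computation follows, again under assumptions: the Gaussian partial derivatives are taken along the horizontal axis, the angle is taken as arctan(O1x/O2x), and σ is a free parameter; none of these choices are specified in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_angle(rgb, sigma=1.0):
    """First-order descriptor built from Gaussian x-derivatives of the opponent channels."""
    R = rgb[..., 0].astype(float)
    G = rgb[..., 1].astype(float)
    B = rgb[..., 2].astype(float)
    # First-order Gaussian derivative of each channel along the column (x) axis.
    Rx = gaussian_filter(R, sigma=sigma, order=(0, 1))
    Gx = gaussian_filter(G, sigma=sigma, order=(0, 1))
    Bx = gaussian_filter(B, sigma=sigma, order=(0, 1))
    O1x = (Rx - Gx) / np.sqrt(2.0)
    O2x = (Rx + Gx - 2.0 * Bx) / np.sqrt(6.0)
    return np.arctan2(O1x, O2x)   # assumed arctan(O1x / O2x) form
```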
The last first-order invariance color descriptor is the spherical angle δ, whose definition involves a dot-product operation and an auxiliary term with γ = R² + G². Similarly, the spherical angle differences are computed in the four directions, i.e., δ_h(i,j)′, δ_v(i,j)′, δ_d1(i,j)′, and δ_d2(i,j)′, and the corresponding histograms c4 ∈ {δ′_h, δ′_v, δ′_d1, δ′_d2} are computed, from which the corresponding color statistical features are extracted.
Finally, a total of 320 color features are extracted from two scales of the distorted screen image with the above four color invariance descriptors. These features are concatenated to form the color statistical feature vector of the perceptual quality of the screen image, which serves as part of the input feature vector of the SVR model.
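The 320-dimensional count is consistent with 4 descriptors × 4 directions × 10 histogram bins × 2 scales. A sketch of the concatenation, reusing the directional_differences and histogram_features helpers above and an assumed per-scale dictionary of descriptor maps, is:

```python
import numpy as np

def color_feature_vector(descriptor_maps_by_scale, bins=10):
    """Concatenate the directional-difference histograms of the four descriptors over two scales.
    descriptor_maps_by_scale: list with one dict per scale, mapping a descriptor name
    (saturation, chromaticity, opponent angle, spherical angle) to its 2-D map."""
    feats = []
    for maps in descriptor_maps_by_scale:                                  # 2 scales
        for descriptor_map in maps.values():                              # 4 descriptors
            for diff in directional_differences(descriptor_map).values(): # 4 directions
                feats.append(histogram_features(diff, bins))              # 10 bins each
    return np.concatenate(feats)                                          # 2 * 4 * 4 * 10 = 320 values
```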
· Extraction of texture statistical features:
The local texture features are extracted with the local ternary pattern operation based on spatial texture continuity.
The local ternary pattern operation extracts important information that is sensitive to image distortion, so the distorted screen image is first split into two channels according to the local ternary pattern operation: the upper pattern (u) and the lower pattern (l). For these two channels, the corresponding histograms are computed according to formulas (6) and (7), and the corresponding texture statistical features, with m ∈ [1,10], are extracted from them. In addition, existing research shows that the logarithmic derivative of an image represents its texture information well; this derivative can be defined as the difference of intensity values between adjacent pixels in the four directions. Therefore, corresponding texture features can also be extracted from the logarithmic derivatives of the upper and lower patterns of the screen image. The updated patterns u(i,j)′ and l(i,j)′ are obtained by applying a logarithm to the upper and lower patterns of the image, as follows:
u(i,j)′ = log[u(i,j) + C]    (18)
l(i,j)′ = log[l(i,j) + C]    (19)
where C is a small constant. The spatial correlation between adjacent pixels of the upper pattern is then computed.
Similarly, the spatial correlation between adjacent pixels of the lower pattern is computed; the corresponding histograms are then computed from the resulting correlation maps, and the corresponding texture statistical features are extracted from them.
Finally, based on the above initial and updated upper and lower patterns, a total of 200 texture features are extracted from two scales of the distorted screen image. These features are concatenated to form the texture feature vector of perceptual quality, which serves as part of the input feature vector of the SVR model.
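A compact sketch of the texture branch follows. The local ternary pattern here uses an 8-neighbor, radius-1 encoding with an assumed threshold t, and the helpers directional_differences and histogram_features from the color sketch are reused; these choices are illustrative and not taken from the text.

```python
import numpy as np

def ltp_upper_lower(gray, t=5.0):
    """Split a grayscale image into upper (u) and lower (l) local ternary pattern maps."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                                    # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]         # 8 neighbors
    u = np.zeros_like(c)
    l = np.zeros_like(c)
    for k, (di, dj) in enumerate(offsets):
        n = g[1 + di:g.shape[0] - 1 + di, 1 + dj:g.shape[1] - 1 + dj]
        u += (n >= c + t) * (2 ** k)                     # upper pattern: clearly brighter neighbors
        l += (n <= c - t) * (2 ** k)                     # lower pattern: clearly darker neighbors
    return u, l

def texture_features(gray, bins=10, C=1e-3):
    """Per-scale texture features: histograms of u and l plus histograms of the
    directional differences of their logarithms (in the spirit of formulas (18)-(19))."""
    feats = []
    for pattern in ltp_upper_lower(gray):                # upper and lower patterns
        feats.append(histogram_features(pattern, bins))  # initial-pattern histogram
        updated = np.log(pattern + C)                    # updated (log) pattern
        for diff in directional_differences(updated).values():
            feats.append(histogram_features(diff, bins)) # directional-difference histograms
    return np.concatenate(feats)                         # (2 + 2*4) * 10 = 100 values per scale
```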
· Computation of screen image quality:
For a distorted screen image, the above operations yield a 520-dimensional feature vector, comprising 320 color statistical features and 200 texture statistical features. The color statistical features consist of 160 dimensions extracted with the zero-order invariance descriptors and 160 dimensions extracted with the first-order invariance descriptors. The texture statistical features consist of 40 dimensions from the upper and lower patterns of the local ternary pattern and 160 dimensions from the updated upper and lower patterns. In the invention, the machine learning method SVR is used as the mapping function for the final quality score. In the experiments, the screen images in the database are randomly split into training and test sets 1000 times, with 80% used for training and the remaining 20% for testing.
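The training and evaluation protocol can be sketched as below; the RBF kernel and default hyper-parameters of the SVR are assumptions, since the text does not specify them.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr, spearmanr

def run_trials(features, mos, n_trials=1000):
    """Random 80/20 splits, SVR regression, and the median PLCC/SRCC/RMSE over all trials."""
    plcc, srcc, rmse = [], [], []
    for seed in range(n_trials):
        x_tr, x_te, y_tr, y_te = train_test_split(features, mos, test_size=0.2, random_state=seed)
        pred = SVR(kernel="rbf").fit(x_tr, y_tr).predict(x_te)
        plcc.append(pearsonr(pred, y_te)[0])
        srcc.append(spearmanr(pred, y_te)[0])
        rmse.append(np.sqrt(np.mean((pred - y_te) ** 2)))
    return np.median(plcc), np.median(srcc), np.median(rmse)
```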
In addition, to evaluate how well the quality predicted by the proposed model agrees with the subjective scores, the logistic regression function below is used to map the predicted quality scores to a common space and remove nonlinear effects,
where f(x) denotes the predicted quality score after nonlinear logistic regression and {β1, β2, β3, β4, β5} are the parameters of the mapping.
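The exact regression expression is not reproduced in this text; the sketch below uses the five-parameter logistic commonly adopted in image quality assessment studies, which is an assumption, and fits the β parameters with nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_5(x, b1, b2, b3, b4, b5):
    """Assumed five-parameter logistic mapping from objective to subjective score space."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def map_predicted_scores(pred, mos):
    """Fit the logistic on (predicted score, subjective score) pairs and return mapped predictions."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    p0 = [np.max(mos), 1.0, np.mean(pred), 1.0, np.mean(mos)]   # rough starting values
    betas, _ = curve_fit(logistic_5, pred, mos, p0=p0, maxfev=10000)
    return logistic_5(pred, *betas)
```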
Table 1: Performance comparison of the invention and other full-reference quality evaluation models on the SIQAD and SCID databases:
Table 1 compares the experimental results of different full-reference image quality evaluation methods. These results show that the proposed screen image quality evaluation method correlates better with subjective evaluation.
Table 2: Performance comparison of the invention and other no-reference quality evaluation models on the SIQAD and SCID databases:
Table 2 compares the experimental results of different no-reference image quality evaluation methods. These results show that the proposed screen image quality evaluation method correlates better with subjective evaluation.
Table 3: Performance comparison of different features on the SIQAD and SCID databases:
Table 3 compares the experimental results of predicting screen image quality with different features.
The above embodiments illustrate the present invention and do not limit it. It should be understood that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the protection scope of the present invention is defined by the appended claims and their equivalents.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910162786.XA CN109978834A (en) | 2019-03-05 | 2019-03-05 | A kind of screen picture quality evaluating method based on color and textural characteristics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910162786.XA CN109978834A (en) | 2019-03-05 | 2019-03-05 | A kind of screen picture quality evaluating method based on color and textural characteristics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109978834A true CN109978834A (en) | 2019-07-05 |
Family
ID=67077844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910162786.XA Pending CN109978834A (en) | 2019-03-05 | 2019-03-05 | A kind of screen picture quality evaluating method based on color and textural characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978834A (en) |
- 2019-03-05 CN CN201910162786.XA patent/CN109978834A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104023226A (en) * | 2014-05-28 | 2014-09-03 | 北京邮电大学 | HVS-based novel video quality evaluation method |
US20170286798A1 (en) * | 2016-03-31 | 2017-10-05 | Ningbo University | Objective assessment method for color image quality based on online manifold learning |
CN107507166A (en) * | 2017-07-21 | 2017-12-22 | 华侨大学 | It is a kind of based on support vector regression without refer to screen image quality measure method |
CN107481236A (en) * | 2017-08-28 | 2017-12-15 | 方玉明 | A kind of quality evaluating method of screen picture |
CN109191460A (en) * | 2018-10-15 | 2019-01-11 | 方玉明 | A kind of quality evaluating method for tone mapping image |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310774A (en) * | 2020-04-01 | 2020-06-19 | 江苏商贸职业学院 | PM2.5 concentration measurement method based on image quality |
CN113362301A (en) * | 2021-06-03 | 2021-09-07 | 中国农业银行股份有限公司 | Imaging instrument testing method, device, equipment, storage medium and program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191460B (en) | Quality evaluation method for tone mapping image | |
CN110443800B (en) | Video image quality evaluation method | |
CN110458803A (en) | Frame image color cast and brightness detection method and device based on colored ribbon calibration | |
CN104504722B (en) | Method for correcting image colors through gray points | |
CN109218716A (en) | Based on color statistics and comentropy without reference tone mapping graph image quality evaluation method | |
CN103617596A (en) | Image color style transformation method based on flow pattern transition | |
CN112381761B (en) | A robust low-light enhanced image quality evaluation method | |
CN111489333B (en) | No-reference night natural image quality evaluation method | |
CN114511567B (en) | Tongue body and tongue coating image identification and separation method | |
CN109978834A (en) | A kind of screen picture quality evaluating method based on color and textural characteristics | |
CN116233380A (en) | A brightness perception quantitative calculation method, system, device and medium | |
CN108711160A (en) | A kind of Target Segmentation method based on HSI enhancement models | |
WO2022120799A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
Hu et al. | No reference quality assessment for Thangka color image based on superpixel | |
CN109348219B (en) | Method for image conversion under different structures and image conversion device | |
Yuan et al. | Color image quality assessment with multi deep convolutional networks | |
Li et al. | Image inpainting based on scene transform and color transfer | |
CN117132544A (en) | Image color cast detection method based on deep learning and color space transformation | |
CN110120030A (en) | Processing method, application, computer-readable medium and the disease occurring area measuring method of wheat diseases generation image | |
US8295539B2 (en) | Method and system of immersive sensation enhancement for video sequence displaying | |
CN108540789B (en) | Image optimization method | |
CN103136722A (en) | Color gamut analysis based image partition method and system | |
CN118411932B (en) | LED display screen brightness correction method and correction system thereof | |
Guo et al. | Color difference matrix index for tone-mapped images quality assessment | |
CN104766301B (en) | A kind of monochromatic gathering algorithm based on image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190705 |