
CN109587474B - Distortion recovery degree-based no-reference video quality evaluation method and device - Google Patents


Info

Publication number
CN109587474B
Authority
CN
China
Prior art keywords
image
frame
quality
training
distortion
Prior art date
Legal status
Active
Application number
CN201811533786.8A
Other languages
Chinese (zh)
Other versions
CN109587474A (en)
Inventor
董培祥
朱立松
Current Assignee
Cntv Wuxi Co ltd
Original Assignee
Cntv Wuxi Co ltd
Priority date
Filing date
Publication date
Application filed by Cntv Wuxi Co ltd filed Critical Cntv Wuxi Co ltd
Priority to CN201811533786.8A
Publication of CN109587474A
Application granted
Publication of CN109587474B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details


Abstract

The invention provides a no-reference video quality evaluation method and device based on distortion recovery degree. The method comprises the following steps: S10, extracting each frame of image in the video to be evaluated; S20, sequentially extracting the luminance component of each frame to obtain its grayscale image; S30, sequentially restoring the grayscale image of each frame using a pre-trained image distortion recovery model; S40, sequentially calculating the quality recovery degree of each frame's grayscale image according to a preset rule; S50, evaluating the quality of the video to be evaluated from the quality recovery degrees of the grayscale images. The method is an objective video quality evaluation method: no observer participates manually in the evaluation, which saves substantial cost and yields higher efficiency.

Description

Distortion recovery degree-based no-reference video quality evaluation method and device
Technical Field
The invention relates to the technical field of computer vision, in particular to a no-reference video quality evaluation method and a no-reference video quality evaluation device based on distortion recovery degree.
Background
During the process of acquiring, storing, processing and transmitting the video, distortion of the video, such as picture blur, noise and the like, is inevitably introduced, and the distortion directly causes the degradation of the subjective experience of the user watching the video, so how to evaluate the video quality quickly and accurately at low cost is a very important issue for video content providers.
Video quality evaluation methods are classified into subjective and objective methods. The subjective method has an observer (a human) score the subjective quality of a video. The score is usually expressed as a Mean Opinion Score (MOS), a judgment the observer makes directly on the quality of the video, or a Differential Mean Opinion Score (DMOS), a judgment the observer makes on the quality difference while viewing a distorted video and the corresponding original video side by side. Subjective evaluation directly measures the human visual experience of video quality and gives accurate results, but it requires multiple observers to watch and score the same video and the scores to be averaged, so it is costly, inefficient, and unsuitable for direct use in practical applications.
In an objective evaluation method, a computer computes the quality of a video directly by an algorithm, without human intervention. Typically, an objective video quality assessment method processes each frame separately and computes the objective quality of every frame in the video segment. Objective methods fall into three categories according to whether a reference image is required: Full Reference (FR), Reduced Reference (RR), and No Reference (NR) methods. The FR method evaluates the quality of a distorted image by comparing it against the original undistorted image; commonly used indexes include Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Visual Information Fidelity (VIF). Although accurate, the method requires the original undistorted image, which is often hard to obtain in practice. The RR method compares selected features of the distorted and original undistorted images to determine the degree of quality loss; features used include probability distributions of wavelet transform coefficients, multi-scale geometric analysis, and contrast sensitivity functions. The NR method requires no information about the original image and estimates quality directly from the distorted image itself, and therefore has the widest practical application.
No-reference image quality evaluation methods fall mainly into two types: methods for specific distortions and methods for non-specific distortions. A specific-distortion method evaluates each distortion type separately to obtain the severity of each, where the types include noise, blur, blocking artifacts, and so on. In practice, however, the distortion of an image may combine several types: during image compression, for example, the loss of high-frequency components inevitably blurs the image more as the compression degree grows, and block-based coding schemes such as H.264/H.265 make the compressed image show visually significant blocking artifacts. Methods for non-specific distortions are therefore closer to the human visual system and more practical. Among them, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) is a typical representative. Its basic idea is that although natural-scene images may differ greatly in brightness or color distribution, their luminance coefficients exhibit a pronounced statistical regularity after luminance normalization, and distortion destroys this regularity. Based on this idea, the BRISQUE algorithm first computes the image's multi-scale Mean-Subtracted Contrast-Normalized (MSCN) coefficients, fits asymmetric generalized Gaussian distributions to these coefficients and to their correlations along different directions to obtain parameters as features, and then trains a Support Vector Regression (SVR) model to obtain the final image quality evaluation model.
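To make the normalization concrete, the sketch below computes MSCN coefficients over a uniform local window. Note that BRISQUE proper uses a Gaussian-weighted window; the window size and stabilizing constant here are illustrative assumptions.

```python
import numpy as np

def mscn_coefficients(image, window=7, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    A uniform local window is a stand-in for BRISQUE's
    Gaussian-weighted window; `window` and `c` are illustrative.
    """
    img = image.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    mu = np.zeros_like(img)
    sigma = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            mu[i, j] = patch.mean()     # local mean
            sigma[i, j] = patch.std()   # local standard deviation
    # Normalized luminance: distortion perturbs its statistics.
    return (img - mu) / (sigma + c)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
mscn = mscn_coefficients(img)
print(mscn.shape)
```

For an undistorted natural image the MSCN values cluster around zero with a characteristic unimodal shape; the fitted generalized-Gaussian parameters then serve as the regression features.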
The BRISQUE algorithm works entirely in the spatial domain, so its overall execution efficiency and running speed are very high. In practice, however, the MSCN distributions of different types of images differ greatly, and a model fitted on a broad data set shows large deviations. In addition, the BRISQUE model targets only natural-scene images; for film, television, and variety-show videos, which contain a large amount of special-effects processing, the MSCN coefficient distributions differ so much that it is hard to train a unified model for evaluation.
Although no-reference video quality assessment is widely applied in every link from video content production to end users, the task is very difficult because no original undistorted video is available for comparison and video content varies widely.
Disclosure of Invention
In view of these problems, the invention provides a method and a device for no-reference video quality evaluation based on distortion recovery degree, effectively addressing the technical difficulty of no-reference video quality evaluation in the prior art.
The technical scheme provided by the invention is as follows:
a no-reference video quality evaluation method based on distortion recovery degree comprises the following steps:
s10, extracting each frame of image in the video to be evaluated;
s20, sequentially extracting the brightness component in each frame of image to obtain a gray level image of each frame of image;
s30, restoring the gray level image of each frame of image in turn by using a pre-trained image distortion restoration model;
s40, calculating the quality restoration degree of the gray-scale image of each frame of image in turn according to a preset rule;
s50, evaluating the quality of the video to be evaluated according to the quality recovery degree of the gray-scale image of each frame of image.
Further preferably, in step S20, the luminance component of each frame is extracted in turn to obtain its grayscale image. Specifically, the luminance component of each frame is obtained by converting the RGB image into a YCbCr image as follows:
Y  = 0.299·R + 0.587·G + 0.114·B
Cb = -0.169·R - 0.331·G + 0.500·B + 128
Cr = 0.500·R - 0.419·G - 0.081·B + 128    (1)
where Y is the luminance component of the YCbCr image, Cb and Cr are its chrominance components, and R, G and B represent the red, green and blue components, respectively, of the RGB image.
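When only the luminance Y is needed, the conversion above reduces to a single weighted sum per pixel. A minimal sketch, assuming full-range BT.601 coefficients:

```python
import numpy as np

def rgb_to_y(rgb):
    """Extract the BT.601 luminance component Y from an RGB image.

    rgb: array of shape (H, W, 3) with R, G, B in [0, 255].
    Returns the grayscale (Y) image of shape (H, W).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A 1x2 image: one pure-white pixel and one pure-black pixel.
rgb = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.float64)
y = rgb_to_y(rgb)
print(y)
```

Since the three weights sum to 1, a neutral gray pixel maps to the same gray level in Y, which is the expected behavior for a luminance extraction.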
Further preferably, in step S30, the method includes the step of training the image distortion recovery model:
s31, constructing an image distortion recovery model;
s32, constructing a training data set, wherein the training data set comprises a distorted image and a non-distorted image;
s32, training the image distortion recovery model by adopting a supervised training method, wherein in the training process, the input of the image distortion recovery model is a distorted image, and the output is a non-distorted image.
Further preferably, in step S32, the undistorted image is compressed at different levels to obtain distorted images at corresponding levels;
in step S33, an image distortion recovery model is trained using different levels of distorted images.
Further preferably, in step S40, the RD-PSNR and/or RD-SSIM between the grayscale image input to the image distortion recovery model and the restored image it outputs is calculated to obtain the quality recovery degree of each frame.
The invention also provides a no-reference video quality evaluation device based on distortion recovery degree, which comprises:
the image extraction module is used for extracting each frame of image in the video to be evaluated;
the brightness component extraction module is used for sequentially extracting the brightness components in each frame of image extracted by the image extraction module to obtain a gray level image of each frame of image;
the image distortion recovery model is used for sequentially recovering the gray level images of each frame of image extracted by the brightness component extraction module;
and the quality evaluation module is used for sequentially calculating the quality recovery degree of the gray-scale image of each frame of image according to a preset rule and evaluating the quality of the video to be evaluated according to the quality recovery degree of the gray-scale image of each frame of image.
Further preferably, in the luminance component extraction module, specifically: the luminance component in each frame of image is obtained by converting the RGB image into the YCbCr image, and the converting means is,
Y  = 0.299·R + 0.587·G + 0.114·B
Cb = -0.169·R - 0.331·G + 0.500·B + 128
Cr = 0.500·R - 0.419·G - 0.081·B + 128    (1)
where Y is the luminance component of the YCbCr image, Cb and Cr are its chrominance components, and R, G and B represent the red, green and blue components, respectively, of the RGB image.
Further preferably, the non-reference video quality evaluation apparatus further includes an image distortion recovery model training module, including:
the model building unit is used for building an image distortion recovery model;
the training set constructing unit is used for constructing a training data set, and the training data set comprises distorted images and undistorted images;
and the training unit is used for training the image distortion recovery model with a supervised training method; during training, the input of the image distortion recovery model is a distorted image and the expected output is the corresponding undistorted image.
Further preferably, in the training set constructing unit, the undistorted image is compressed at different levels to obtain a distorted image at a corresponding level;
in the training unit, the image distortion recovery model is trained using different levels of distorted images.
Further preferably, in the quality evaluation module, the RD-PSNR and/or RD-SSIM between the grayscale image input to the image distortion recovery model and the restored image it outputs is calculated to obtain the quality recovery degree of each frame.
The method and the device for evaluating the quality of the non-reference video based on the distortion recovery degree have the beneficial effects that:
the non-reference video quality evaluation method provided by the invention adopts the pre-constructed image distortion recovery model to recover the gray level image of each frame of image in the video to be evaluated respectively, obtains the quality recovery degree of each frame of image by calculating the RD-PSNR and/or RD-SSIM between the recovered image and the gray level image before recovery, and further evaluates the quality of the video to be evaluated, is an objective video quality evaluation method, does not need manual participation of an observer in the evaluation process of the video quality, can save a large amount of cost, has higher efficiency, and provides a feasible scheme for quality comparison between videos with different resolutions; in addition, the no-reference video quality evaluation method can be used for evaluating the quality of distortion videos without specific distortion types, can be used for processing the video quality distortion under different types of distortion coupling conditions, and is very wide in application; moreover, the distortion recovery degree metric index provided in the no-reference video quality evaluation method is more in line with the subjective perception mode of the human visual system on the video quality, and has better consistency with the subjective experience of people.
Drawings
The foregoing features, technical features, advantages and embodiments are further described in the following detailed description of the preferred embodiments, which is to be read in connection with the accompanying drawings.
FIG. 1 is a schematic flow chart of a distortion recovery degree-based no-reference video quality evaluation method according to the present invention;
FIG. 2 is a diagram illustrating the process of image compression of an original undistorted image according to the present invention;
fig. 3 is a schematic structural diagram of a non-reference video quality evaluation device based on distortion recovery degree in the present invention.
Reference numerals:
100-no-reference video quality evaluation device, 110-image extraction module, 120-brightness component extraction module, 130-image distortion recovery model and 140-quality evaluation module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
As shown in fig. 1, which is a schematic flow chart of a no-reference video quality evaluation method based on distortion recovery degree provided by the present invention, it can be seen from the figure that the no-reference video quality evaluation method includes:
s10, extracting each frame of image in the video to be evaluated;
s20, sequentially extracting the brightness component in each frame of image to obtain a gray level image of each frame of image;
s30, restoring the gray level image of each frame of image in turn by using a pre-trained image distortion restoration model;
s40, calculating the quality restoration degree of the gray-scale image of each frame of image in turn according to a preset rule;
s50, evaluating the quality of the video to be evaluated according to the quality recovery degree of the gray-scale image of each frame of image.
Because the no-reference video quality evaluation method provided by the invention is realized by respectively evaluating the quality of each frame of image in the video to be evaluated, each frame of image is obtained from the video to be evaluated in a video decoding mode in step S10.
For the human visual system, the sensitivity to the chrominance is lower than that to the luminance, so the non-reference video quality evaluation method provided by the invention evaluates the quality of the luminance component in each frame of image. As is known, a color image is generally represented by three primary colors of R (red), G (green), and B (blue), and a luminance and chrominance-separated image is represented by Y, Cb and Cr, where Y is a luminance component and Cb and Cr are chrominance components. According to the standard ITU-R BT.601-7, the conversion formula from the RGB format to the YCbCr format is as follows (1):
Y  = 0.299·R + 0.587·G + 0.114·B
Cb = -0.169·R - 0.331·G + 0.500·B + 128
Cr = 0.500·R - 0.419·G - 0.081·B + 128    (1)
based on this, in step S20, the luminance component Y in each frame image is obtained by converting the RGB image into the YCbCr image, and then the gray scale image of each frame image is obtained.
After the grayscale image of each frame is obtained, a pre-trained image distortion recovery model restores each one in turn to obtain the restored image. Training the image distortion recovery model proceeds as follows: first, construct the image distortion recovery model; then, construct a training data set of distorted and undistorted images; finally, train the model with a supervised training method, in which the input is a distorted image and the expected output is the corresponding undistorted image. During restoration, the grayscale image of each frame of the video to be evaluated is fed into the image distortion recovery model to obtain the restored image. Image distortion recovery is an important research direction in computer vision and image processing, and among the many distortion types, image blur is pervasive in video. Blur arises for many reasons, commonly: fast relative motion between the subject and the camera during shooting, camera defocus, and the loss of high frequencies during video compression. With the development of deep learning, image distortion recovery keeps improving; the method does not restrict the recovery algorithm, and any well-performing image distortion recovery algorithm can be used in the no-reference video quality evaluation method of the invention to build the image distortion recovery model that restores the grayscale images.
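Since the invention leaves the restoration algorithm open, the sketch below only fixes the interface the evaluation pipeline relies on: a grayscale frame in, a restored frame of the same shape out. The 3×3 mean filter is a hypothetical stand-in for a trained deep restoration network, not the patented model.

```python
import numpy as np

def restore(gray):
    """Stand-in for a trained image distortion recovery model.

    Takes a grayscale frame and returns a "restored" frame of the
    same shape. Here: a 3x3 mean filter as an illustrative placeholder.
    """
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    out = np.zeros_like(gray, dtype=np.float64)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

gray = np.arange(16, dtype=np.float64).reshape(4, 4)
restored = restore(gray)
print(restored.shape)
```

Any model exposing this signature (e.g. a CNN's forward pass) can be dropped into the later quality-recovery computation unchanged.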
To ensure the accuracy of the image distortion recovery model, the training set should contain images of different qualities, and during training the model should be trained on distorted images of these different qualities. That is, besides a standard data set, images of multiple quality levels are prepared in the training set for fine-tuning the model parameters. Specifically, the multi-level quality images can be obtained by compressing the original undistorted image to different degrees; in one example the compression is divided into five levels 1-5, where the distorted image at level 1 has the worst quality and the one at level 5 the highest, as shown in FIG. 2.
After the image distortion recovery model restores the grayscale image of each frame to obtain the restored image, the RD-PSNR and/or RD-SSIM between the grayscale image input to the model and the restored image it outputs is calculated to obtain the quality recovery degree of each frame. The calculation formula of RD-PSNR is shown as formula (2):

RD-PSNR = 10 · log10( MAX² / MSE )    (2)

where MAX is the maximum possible pixel value (255 for 8-bit images),
and MSE is the mean square error between the grayscale image I and the output restored image K:

MSE = (1 / (M·N)) · Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} [ I(i,j) − K(i,j) ]²

where M × N is the resolution of the grayscale image and the output restored image, I(i,j) is the pixel value at (i,j) in the grayscale image I, and K(i,j) is the pixel value at (i,j) in the restored image K.
The calculation formula of RD-SSIM is shown as formula (3):
RD-SSIM = [ (2·μ_I·μ_K + c1) · (2·σ_IK + c2) ] / [ (μ_I² + μ_K² + c1) · (σ_I² + σ_K² + c2) ]    (3)
wherein: mu.sIIs the mean value of the grey scale map I (input image),
Figure BDA0001906364880000074
μKin order to output the mean value of the restored image K,
Figure BDA0001906364880000075
σIis the variance of the gray-scale image I,
Figure BDA0001906364880000076
σKin order to output the variance of the restored image K,
Figure BDA0001906364880000077
Figure BDA0001906364880000078
σIKcovariance between grayscale I and output restored image:
Figure BDA0001906364880000079
c1 and c2 are coefficients: c1 = (k1·L)², c2 = (k2·L)², where L is the dynamic range of the image pixel values. In one example, when the pixel values of the grayscale image I are represented with 8 bits, L = 2^bitdepth − 1 = 255; the recommended default values are k1 = 0.01 and k2 = 0.03.
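A single-window (whole-image) reading of formula (3) can be sketched as follows; production SSIM implementations usually aggregate over local Gaussian windows, so this global form is a simplification:

```python
import numpy as np

def rd_ssim(gray, restored, L=255.0, k1=0.01, k2=0.03):
    """Quality recovery degree via a global SSIM between the input
    grayscale frame I and the restored frame K, per formula (3)."""
    I = gray.astype(np.float64)
    K = restored.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_i, mu_k = I.mean(), K.mean()          # means
    var_i, var_k = I.var(), K.var()          # variances
    cov = ((I - mu_i) * (K - mu_k)).mean()   # covariance
    return (((2 * mu_i * mu_k + c1) * (2 * cov + c2))
            / ((mu_i ** 2 + mu_k ** 2 + c1) * (var_i + var_k + c2)))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
print(round(rd_ssim(img, img), 4))  # identical frames score 1.0
```

The score is 1 only when the restored frame equals the input, so a low RD-SSIM, like a high RD-PSNR, indicates heavy restoration and thus heavy original distortion.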
For a video V, its set of frames is {f_1, f_2, …, f_#V}, where #V denotes the number of frames in the video V. The two indexes measuring the quality of V are shown in formulas (4) and (5):

RD-PSNR(V) = (1 / #V) · Σ_{k=1}^{#V} RD-PSNR(f_k)    (4)

RD-SSIM(V) = (1 / #V) · Σ_{k=1}^{#V} RD-SSIM(f_k)    (5)
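Reading formulas (4) and (5) as per-frame averages, the video-level score is a mean of the per-frame recovery degrees. The restoration model and metric below are trivial stand-ins to keep the sketch self-contained:

```python
import numpy as np

def video_quality(frames, restore, frame_metric):
    """Average a per-frame recovery-degree metric over a video,
    in the spirit of formulas (4) and (5)."""
    scores = [frame_metric(f, restore(f)) for f in frames]
    return float(np.mean(scores))

# Toy video: three 4x4 grayscale frames; identity "restoration"
# and a mean-absolute-difference metric as placeholders.
frames = [np.full((4, 4), v, dtype=np.float64) for v in (10, 20, 30)]
identity = lambda f: f
mae = lambda a, b: float(np.abs(a - b).mean())
print(video_quality(frames, identity, mae))
```

Plugging in the RD-PSNR or RD-SSIM functions for `frame_metric` and a trained model for `restore` yields the two indexes of formulas (4) and (5).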
as shown in fig. 3, which is a schematic structural diagram of a non-reference video quality evaluation apparatus based on distortion recovery degree according to the present invention, it can be seen that the non-reference video quality evaluation apparatus 100 includes: the image distortion restoration method comprises an image extraction module 110, a brightness component extraction module 120, an image distortion restoration model 130 and a quality evaluation module 140, wherein the brightness component extraction module 120 is connected with the image extraction module 110, the image distortion restoration model 130 is connected with the brightness component extraction module 120, and the quality evaluation module 140 is connected with the image distortion restoration model 130. The image extraction module 110 is configured to extract each frame of image in a video to be evaluated; the luminance component extraction module 120 is configured to sequentially extract the luminance components in each frame of image extracted by the image extraction module 110 to obtain a grayscale image of each frame of image; the image distortion recovery model 130 is used for sequentially recovering the grayscale images of each frame of image extracted by the luminance component extraction module 120; the quality evaluation module 140 is configured to sequentially calculate a quality restoration degree of the grayscale map of each frame of image according to a preset rule, and evaluate the quality of the video to be evaluated according to the quality restoration degree of the grayscale map of each frame of image.
The no-reference video quality evaluation method provided by the invention is realized by respectively evaluating the quality of each frame of image in the video to be evaluated, and each frame of image is obtained from the video to be evaluated in the image extraction module 110 in a video decoding mode.
For the human visual system, the sensitivity to the chrominance is lower than that to the luminance, so the non-reference video quality evaluation method provided by the invention evaluates the quality of the luminance component in each frame of image. As is known, a color image is generally represented by three primary colors of R (red), G (green), and B (blue), and a luminance and chrominance-separated image is represented by Y, Cb and Cr, where Y is a luminance component and Cb and Cr are chrominance components. According to the standard ITU-R BT.601-7, the conversion formula from the RGB format to the YCbCr format is as follows (1):
Y  = 0.299·R + 0.587·G + 0.114·B
Cb = -0.169·R - 0.331·G + 0.500·B + 128
Cr = 0.500·R - 0.419·G - 0.081·B + 128    (1)
based on this, in the luminance component extraction module 120, the luminance component Y in each frame of image is obtained by converting the RGB image into the YCbCr image, and then the gray scale image of each frame of image is obtained.
After the luminance component extraction module 120 obtains the grayscale image of each frame, the pre-trained image distortion recovery model 130 restores each one in turn to obtain the restored image. Training the image distortion recovery model 130 proceeds as follows: first, the model construction unit constructs the image distortion recovery model 130; then, the training set construction unit constructs a training data set of distorted and undistorted images; finally, the training unit trains the image distortion recovery model 130 with a supervised training method, in which the input is a distorted image and the expected output is the corresponding undistorted image. During restoration, the grayscale image of each frame of the video to be evaluated is fed into the image distortion recovery model 130 to obtain the restored image. Image distortion recovery is an important research direction in computer vision and image processing, and among the many distortion types, image blur is pervasive in video. Blur arises for many reasons, commonly: fast relative motion between the subject and the camera during shooting, camera defocus, and the loss of high frequencies during video compression. With the development of deep learning, image distortion recovery keeps improving; the invention does not restrict the recovery algorithm, and any well-performing image distortion recovery algorithm can be used in the no-reference video quality evaluation method of the invention to build the image distortion recovery model 130 that restores the grayscale images.
To ensure the accuracy of the image distortion recovery model 130, the training set constructed by the training set construction unit should contain images of different qualities, and during training the model 130 should be trained on distorted images of these different qualities. That is, besides a standard data set, images of multiple quality levels are prepared in the training set for fine-tuning the model parameters. Specifically, the multi-level quality images can be obtained by compressing the original undistorted image to different degrees; in one example the compression is divided into five levels 1-5, where the distorted image at level 1 has the worst quality and the one at level 5 the highest, as shown in FIG. 2.
After the image distortion recovery model 130 restores the grayscale image of each frame to obtain the restored image, the quality evaluation module 140 calculates the RD-PSNR and/or RD-SSIM between the grayscale image input to the image distortion recovery model 130 and the restored image it outputs to obtain the quality recovery degree of each frame. The calculation formula of RD-PSNR is shown as formula (2):

RD-PSNR = 10 · log10( MAX² / MSE )    (2)

where MAX is the maximum possible pixel value (255 for 8-bit images),
and MSE is the mean square error between the grayscale image I and the output restored image K:

MSE = (1 / (M·N)) · Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} [ I(i,j) − K(i,j) ]²

where M × N is the resolution of the grayscale image and the output restored image, I(i,j) is the pixel value at (i,j) in the grayscale image I, and K(i,j) is the pixel value at (i,j) in the restored image K.
The calculation formula of RD-SSIM is shown as formula (3):
RD-SSIM = [ (2·μ_I·μ_K + c1) · (2·σ_IK + c2) ] / [ (μ_I² + μ_K² + c1) · (σ_I² + σ_K² + c2) ]    (3)
wherein: mu.sIIs the mean value of the grey scale map I (input image),
Figure BDA0001906364880000102
μKin order to output the mean value of the restored image K,
Figure BDA0001906364880000103
σIis the variance of the gray-scale image I,
Figure BDA0001906364880000104
σKin order to output the variance of the restored image K,
Figure BDA0001906364880000105
Figure BDA0001906364880000106
σIKcovariance between grayscale I and output restored image:
Figure BDA0001906364880000107
c1 and c2 are coefficients: c1 = (k1·L)², c2 = (k2·L)², where L is the dynamic range of the image pixel values. In one example, when the pixel values of the grayscale image I are represented with 8 bits, L = 2^bitdepth − 1 = 255; the recommended default values are k1 = 0.01 and k2 = 0.03.
For a video V, its set of frames is $\{f_1, f_2, \ldots, f_{\#V}\}$, where #V denotes the number of frames in the video V. The two indexes for measuring the quality of the video V are given by equations (4) and (5):

$$\text{RD-PSNR}(V) = \frac{1}{\#V}\sum_{i=1}^{\#V}\text{RD-PSNR}(f_i) \tag{4}$$

$$\text{RD-SSIM}(V) = \frac{1}{\#V}\sum_{i=1}^{\#V}\text{RD-SSIM}(f_i) \tag{5}$$
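The per-frame scores are then averaged over the video as in formulas (4) and (5). In the sketch below, `restore` stands in for the trained distortion-recovery model and `metric` for a frame-level measure such as RD-PSNR or RD-SSIM; both are assumed to be plain callables for illustration.

```python
def video_quality(frames, restore, metric):
    """Video-level quality per formulas (4)/(5): score each frame
    against its model-restored version, then average over #V frames."""
    scores = [metric(f, restore(f)) for f in frames]
    return sum(scores) / len(scores)
```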
it should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for persons skilled in the art, numerous modifications and adaptations can be made without departing from the principle of the present invention, and such modifications and adaptations should be considered as within the scope of the present invention.

Claims (6)

1. A no-reference video quality evaluation method based on distortion recovery degree is characterized by comprising the following steps:
s10, extracting each frame of image in the video to be evaluated;
s20, sequentially extracting the brightness component in each frame of image to obtain a gray level image of each frame of image;
s30, restoring the gray level image of each frame of image in turn by using a pre-trained image distortion restoration model;
s40, calculating the quality restoration degree of the gray-scale image of each frame of image in turn according to a preset rule;
s50, evaluating the quality of the video to be evaluated according to the quality recovery degree of the gray level image of each frame of image;
in step S20, the luminance component of each frame is extracted in turn to obtain the grayscale image of each frame, specifically: the luminance component of each frame is obtained by converting the RGB image into a YCbCr image, via

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix}$$

where Y is the luminance component of the YCbCr image, Cb and Cr are the chrominance components of the YCbCr image, and R, G and B represent the red, green and blue components, respectively, of the RGB image;
in step S30, the method includes the step of training the image distortion recovery model:
s31, constructing an image distortion recovery model;
s32, constructing a training data set, wherein the training data set comprises a distorted image and a non-distorted image;
s32, training the image distortion recovery model by adopting a supervised training method, wherein in the training process, the input of the image distortion recovery model is a distorted image, and the output is a non-distorted image.
2. The no-reference video quality evaluation method according to claim 1, wherein
in step S32, the undistorted images are compressed at different levels to obtain distorted images of corresponding levels;
in step S33, the image distortion recovery model is trained using the distorted images of different levels.
3. The no-reference video quality evaluation method according to claim 1 or 2, wherein in step S40, the RD-PSNR and/or RD-SSIM between the grayscale image input to the image distortion recovery model and the output recovered image is calculated to obtain the quality restoration degree of each frame.
4. A distortion recovery degree-based no-reference video quality evaluation apparatus, comprising:
the image extraction module is used for extracting each frame of image in the video to be evaluated;
the brightness component extraction module is used for sequentially extracting the brightness components in each frame of image extracted by the image extraction module to obtain a gray level image of each frame of image;
the image distortion recovery model is used for sequentially recovering the gray level images of each frame of image extracted by the brightness component extraction module;
the quality evaluation module is used for sequentially calculating the quality recovery degree of the gray-scale image of each frame of image according to a preset rule and evaluating the quality of the video to be evaluated according to the quality recovery degree of the gray-scale image of each frame of image;
in the luminance component extraction module, specifically: the luminance component of each frame is obtained by converting the RGB image into a YCbCr image, via

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix}$$

where Y is the luminance component of the YCbCr image, Cb and Cr are the chrominance components of the YCbCr image, and R, G and B represent the red, green and blue components, respectively, of the RGB image;
the non-reference video quality evaluation device further comprises an image distortion recovery model training module, and the image distortion recovery model training module comprises:
the model building unit is used for building an image distortion recovery model;
the training set constructing unit is used for constructing a training data set, and the training data set comprises distorted images and undistorted images;
and the training unit is used for training the image distortion recovery model by a supervised training method; in the training process, the input of the image distortion recovery model is a distorted image and the output target is the corresponding undistorted image.
5. The no-reference video quality evaluation device according to claim 4, wherein
in the training set constructing unit, the undistorted images are compressed at different levels to obtain distorted images of corresponding levels;
in the training unit, the image distortion recovery model is trained using the distorted images of different levels.
6. The no-reference video quality evaluation device according to claim 4 or 5, wherein in the quality evaluation module, the RD-PSNR and/or RD-SSIM between the grayscale image input to the image distortion recovery model and the output recovered image is calculated to obtain the quality restoration degree of each frame.
CN201811533786.8A 2018-12-14 2018-12-14 Distortion recovery degree-based no-reference video quality evaluation method and device Active CN109587474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811533786.8A CN109587474B (en) 2018-12-14 2018-12-14 Distortion recovery degree-based no-reference video quality evaluation method and device


Publications (2)

Publication Number Publication Date
CN109587474A CN109587474A (en) 2019-04-05
CN109587474B (en) 2021-03-12

Family

ID=65929617


Country Status (1)

Country Link
CN (1) CN109587474B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435244B (en) * 2020-11-27 2025-04-08 广州华多网络科技有限公司 Quality evaluation method and device for live video, computer equipment and storage medium
CN112767310B (en) * 2020-12-31 2024-03-22 咪咕视讯科技有限公司 Video quality evaluation method, device and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478691A (en) * 2008-12-31 2009-07-08 浙江大学 Non-reference evaluation method for Motion Jpeg2000 video objective quality
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structure sharpness image quality assessment method
CN108428227A (en) * 2018-02-27 2018-08-21 浙江科技学院 No-reference image quality assessment method based on fully convolutional neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401330B2 (en) * 2009-10-09 2013-03-19 At&T Intellectual Property I, L.P. No-reference spatial aliasing measure for digital image resizing
CN101853504B (en) * 2010-05-07 2012-04-25 厦门大学 Image quality evaluation method based on visual characteristics and structural similarity


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A no-reference method for JPEG image compression impairment evaluation; Haibo Dong, Ci Wang; 2011 4th International Congress on Image and Signal Processing; 2011-12-12; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant