
CN110944200B - Method for evaluating immersive video transcoding scheme - Google Patents

Method for evaluating immersive video transcoding scheme

Info

Publication number
CN110944200B
CN110944200B
Authority
CN
China
Prior art keywords
video
quality
field
view
frame rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911257216.5A
Other languages
Chinese (zh)
Other versions
CN110944200A (en)
Inventor
马展
孟宇
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201911257216.5A
Publication of CN110944200A
Application granted
Publication of CN110944200B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an effective method for evaluating immersive video transcoding schemes. The specific steps are as follows: (1) segment the complete 360° panoramic video into a plurality of regular rectangles; (2) apply Sobel filtering to each block of every frame of the segmented video to extract edge feature information, compute the residual between consecutive frames in the gradient domain, and take the mean and standard deviation of the residual; (3) according to the position of the user's field of view, combined with the block features, perform feature conversion and model-parameter calculation for the field-of-view content; (4) input the parameters to be applied in the current video compression encoding, namely resolution, frame rate and quantization step size, and output the change in subjective perceptual quality of that encoding scheme relative to the original, untranscoded video. The method enables fast and accurate quality prediction tailored to the user behavior of immersive viewing, and its predictions closely match the subjective quality perception data of real users.

Description

Method for evaluating immersive video transcoding scheme
Technical Field
The invention relates to the field of computational vision, in particular to a method for evaluating an immersive video transcoding scheme.
Background
In the process of video acquisition, transmission and storage, video must be compressed and encoded to meet storage and transmission requirements. During compression encoding, three parameters have the largest influence on the subjective quality of the encoded video: resolution, frame rate and quantization step size. Past research has found that the subjective-quality influence of each of these parameters is related to the video content; for example, in a scene containing a high-speed moving object, a change in frame rate affects subjective quality more visibly than in a slowly changing natural-landscape video. In addition, the same video content at the same bit rate can differ widely in subjective quality depending on the encoding parameters, which easily wastes transmission and storage resources. Evaluating the quality of a coding scheme during compression, for different video contents, has therefore become a very critical problem, and one continually explored throughout the development of video quality evaluation technology.
Video quality evaluation technology aims to evaluate video quality after lossy processing such as compression and transmission. Existing algorithms can be divided methodologically into subjective and objective evaluation. Objective quality assessment falls into three categories according to the amount of information taken from the reference video: full-reference, semi-reference and no-reference methods.
The full-reference method requires both the original (lossless) video and the lossy video; it is easy to implement and apply, and mainly compares the information content and the similarity of certain features of two videos with the same content. The no-reference method depends only on the lossy video and is difficult to implement; common current implementations use specific algorithms to detect specific types of distortion, such as blur detection, noise detection and edge analysis. The semi-reference method needs only partial information, or partially extracted features, of the original or reference video as the basis for evaluating the lossy video. It offers a solution when the reference-video information cannot be obtained in full; in a practical system it gives more stable and accurate results than no-reference evaluation, while avoiding the unnecessary storage and transmission cost of the full-reference method. Based on the above discussion, for scenes where a coding scheme must be evaluated and the original high-quality reference is available, first encoding to obtain the lossy video and then evaluating it not only wastes computing resources but also adds processing time, making ultra-low-latency response impossible. Establishing the link between coding parameters and video quality, in the spirit of semi-reference quality evaluation, is therefore a reasonable way to solve the aforementioned problems.
With the development of software and hardware technology, immersive media content such as VR (virtual reality) and AR (augmented reality) is gradually entering the consumer market and playing an increasingly important role in education, medical treatment, entertainment and other fields. Immersive interaction differs greatly from conventional flat-video interaction in both viewing environment and user freedom. When watching traditional flat video, the screen of the playback device covers only a local area at the center of the user's visual field, and the user has no freedom over the content. In an immersive viewing environment, the video content generally covers the user's entire visual field, isolating most unnecessary external visual interference. Meanwhile, 360-degree video content gives the user a higher degree of freedom: at any moment the field of view covers only the part of the video the user has chosen, the user can change direction and position at will during viewing, and attention is usually focused on the central part of the current field of view.
Because of these changes, conventional quality evaluation methods for flat video cannot meet the requirements of immersive viewing. On the one hand, the change of viewing environment changes the user's subjective quality perception, so quality evaluation models built for traditional flat video can produce large errors; on the other hand, directly evaluating the quality of the complete panoramic video cannot accommodate the local nature of the user's focus in an immersive viewing environment.
However, no highly practical immersive video quality evaluation model designed specifically for the above changes has yet been proposed. How to optimize the design of quality evaluation technology to adapt to the changes brought by the immersive viewing environment, and how to link video coding parameters directly to subjective video quality so that coding schemes can be evaluated, have therefore become very important subjects.
Disclosure of Invention
In view of the above changes and characteristics of the prior art, it is an object of the present invention to propose a method of evaluating an immersive video transcoding scheme.
The invention uses the technique of a semi-reference quality evaluation model: it takes the coding parameters and a small number of features of the original video as input and, through simple calculation, outputs the quality loss of those coding parameters relative to the original high-quality video. The technical scheme adopted is specifically as follows:
a method of evaluating an immersive video transcoding scheme, comprising the steps of:
step 1, dividing a complete high-bit-rate panoramic video into a plurality of regular rectangles, wherein the size of each rectangle in the transverse and longitudinal directions is smaller than one half of the size of the user's field of view in the corresponding direction;
step 2, performing Sobel filtering on each block of each frame of the partitioned video to extract edge feature information and obtain the corresponding gradient-domain information, then taking the difference between corresponding blocks of consecutive frames and taking the mean σ_mean and the standard deviation σ_std of the resulting residual;
step 3, after the position information of the user's field of view is obtained, calculating the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:

α_s = f_s(σ_mean^FoV, σ_std^FoV), α_q = f_q(σ_mean^FoV, σ_std^FoV), α_t = f_t(σ_mean^FoV, σ_std^FoV)

(the exact functional forms appear only as equation images in the source), wherein α_s is the coefficient describing the change of video quality as the video resolution decreases, α_q is the coefficient describing the change of video quality as the compression quantization step increases, α_t is the coefficient describing the change of video quality as the frame rate decreases, and the field-of-view features are aggregated as

σ_mean^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_mean^(k), σ_std^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_std^(k),

where n denotes the number of blocks covered by the current field of view, s_k is the area of block k covered by the field of view, s_FoV is the area of the field of view, σ_mean^(k) and σ_std^(k) are the feature results of block k, and σ^FoV is the sum of the features of all blocks covered by the field of view;
step 4, calculating a quality evaluation model, wherein a specific formula is as follows:
Figure BDA0002310610960000039
Figure BDA00023106109600000310
wherein Q (s, Q, t) is a video quality assessment prediction result after encoding according to a given parameter; resolution ratio
Figure BDA00023106109600000311
Frame rate
Figure BDA00023106109600000312
Quantization step size
Figure BDA00023106109600000313
Star、TtarAnd QtarThree coding parameters, S, representing the corresponding actual resolution, frame rate and quantization step size at transcodingori,ToriAnd QtarRepresenting the resolution, frame rate and quantization step size, alpha, of the original high quality videosIs input dependent QtarA numerical value;
and step 5, evaluating each coding scheme with the methods of steps 1 to 4 to obtain its quality evaluation prediction, and selecting the resolution, frame rate and quantization step size combination that maximizes Q(s, q, t) as the final coding scheme.
The invention provides a semi-reference video quality evaluation method adapted to the immersive viewing environment: by combining feature-dependent model parameters with the coding parameters as the two key inputs, a direct mapping between coding parameters and video quality is established. In addition, a block-feature prediction module, designed around the user behavior characteristics of immersive viewing, improves the response speed of the system when deployed and running. The method not only establishes a direct relation between compression coding parameters and video quality, but also adapts to the video content through the feature dependence of the model parameters, yielding an immersive video quality evaluation method of strong practicability and high accuracy that can evaluate different coding schemes and select the optimal scheme according to the evaluation results.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is a schematic view of video blocking.
Detailed Description
Referring to fig. 1, the method by which the server evaluates an immersive video transcoding scheme specifically comprises the following steps:
Step 1, as shown in fig. 2. During panoramic viewing only the image inside the field of view is visible to the user, and the position of that field of view is selected by the user at random. To accelerate subsequent processing, the complete high-bit-rate panoramic video is therefore first partitioned into a plurality of regular rectangles, and the subsequent block-feature calculation step handles a user field of view that may appear at any position. The size of each rectangle in the transverse and longitudinal directions is smaller than one half of the size of the field of view in the corresponding direction, which guarantees the accuracy of the subsequent field-of-view content-feature calculation. A suitable block-feature calculation strategy is introduced to adapt to the randomly appearing field-of-view position, optimizing calculation speed while preserving accuracy.
Step 2, after partitioning, extract the features of each block of video content. To guarantee accurate results while limiting the amount of calculation, the required features are as follows:
Feature one: after Sobel filtering extracts the edge features of each frame, take the difference between consecutive frames and the mean of each residual. N frames of video content yield N-1 values, whose average is recorded as σ_mean^(n), where n is the corresponding block number.
Feature two: after Sobel filtering extracts the edge features of each frame, take the difference between consecutive frames and the standard deviation of each residual. N frames of video content yield N-1 values, whose average is recorded as σ_std^(n), where n is the corresponding block number.
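Features one and two can be sketched with NumPy as below. This is an illustrative implementation, not the patent's exact one: the Sobel kernels are the standard 3x3 ones, the function names are my own, and since the text does not say whether the residual mean is signed or absolute, the signed residual is used here.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D array via valid-mode Sobel filtering."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2), dtype=np.float32)
    gy = np.zeros_like(gx)
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return np.hypot(gx, gy)

def tile_features(frames):
    """frames: list of N grayscale tile images of equal shape.
    Returns (sigma_mean, sigma_std): the mean and the standard deviation of
    each consecutive-frame residual in the gradient domain, averaged over
    the N-1 frame pairs."""
    grads = [sobel_magnitude(np.asarray(f, dtype=np.float32)) for f in frames]
    means = [float((cur - prev).mean()) for prev, cur in zip(grads, grads[1:])]
    stds = [float((cur - prev).std()) for prev, cur in zip(grads, grads[1:])]
    return float(np.mean(means)), float(np.mean(stds))
```

A static tile yields both features equal to zero, which matches the intuition that the features measure temporal change of edge content.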
Step 3, after the position information of the user's field of view is obtained, calculate the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:

α_s = f_s(σ_mean^FoV, σ_std^FoV), α_q = f_q(σ_mean^FoV, σ_std^FoV), α_t = f_t(σ_mean^FoV, σ_std^FoV)

(the exact functional forms appear only as equation images in the source), where σ_mean and σ_std are the video gradient-domain features calculated in the previous steps, α_s is the coefficient describing the change of video quality as the video resolution decreases, α_q is the coefficient describing the change of video quality as the compression quantization step increases, α_t is the coefficient describing the change of video quality as the frame rate decreases, and the field-of-view features are aggregated as

σ_mean^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_mean^(k), σ_std^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_std^(k),

where n denotes the number of blocks covered by the current field of view, s_k is the area of block k covered by the field of view, s_FoV is the area of the field of view, σ_mean^(k) and σ_std^(k) are the feature results of block k, and σ^FoV is the sum of the features of all blocks covered by the field of view.
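The area-weighted aggregation above can be written down directly. The tile feature values and overlap areas below are invented for illustration; only the weighting s_k / s_FoV comes from the text.

```python
def fov_feature(tile_sigma, overlap_areas, fov_area):
    """Aggregate per-tile features sigma^(k) into the field-of-view feature
    sigma^FoV = sum_k (s_k / s_FoV) * sigma^(k), where s_k is the area of
    tile k covered by the field of view and s_FoV is the total FoV area."""
    return sum((s_k / fov_area) * sig
               for s_k, sig in zip(overlap_areas, tile_sigma))

# A field of view overlapping three tiles; the overlaps sum to the FoV area.
sigma_fov = fov_feature([10.0, 20.0, 30.0], [0.5, 0.3, 0.2], fov_area=1.0)
# area-weighted sum: 0.5*10 + 0.3*20 + 0.2*30 = 17
```

Because the weights sum to one when the tiles exactly cover the field of view, the aggregate stays on the same scale as the per-tile features.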
Step 4, perform quality evaluation on the alternative coding schemes. When several different combinations of resolution, frame rate and quantization step size all meet the corresponding storage or transmission requirements, substitute the specific parameters of each combination into the calculation to obtain the quality evaluation result of each coding scheme. The two model equations appear only as equation images in the source; they evaluate Q(s, q, t), the predicted video quality after encoding with the given parameters, from the normalized coding parameters: resolution s = S_tar / S_ori, frame rate t = T_tar / T_ori, and quantization step size q = Q_tar / Q_ori, where α_s is a value that depends on the input Q_tar, S_tar, T_tar and Q_tar are the three coding parameters, i.e. the actual resolution, frame rate and quantization step size used at transcoding, and S_ori, T_ori and Q_ori are the parameters of the original high-quality video.
Step 5, after each candidate coding scheme has been evaluated, a corresponding quality evaluation prediction is obtained, and the resolution, frame rate and quantization step size combination that maximizes Q(s, q, t) is selected as the final coding scheme.
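Step 5 reduces to an argmax over the candidate parameter combinations, which can be sketched as below. The patent's actual Q(s, q, t) appears only as an image in the source, so `predict_quality` is a placeholder with a plausible monotonic shape (quality saturates as normalized resolution s and frame rate t rise, and decays as normalized quantization step q grows); the coefficients and candidate schemes are invented for illustration.

```python
import math

def predict_quality(s, t, q, alpha_s=2.0, alpha_t=1.0, alpha_q=1.0):
    """Placeholder for the patent's quality model Q(s, q, t); higher is better.
    s, t, q are the coding parameters normalized by the original video's."""
    return ((1 - math.exp(-alpha_s * s))
            * (1 - math.exp(-alpha_t * t))
            * math.exp(-alpha_q * (q - 1)))

def best_scheme(candidates, ori):
    """candidates: iterable of (S_tar, T_tar, Q_tar) tuples;
    ori: (S_ori, T_ori, Q_ori). Returns the highest-quality candidate."""
    s_ori, t_ori, q_ori = ori
    return max(candidates,
               key=lambda c: predict_quality(c[0] / s_ori,
                                             c[1] / t_ori,
                                             c[2] / q_ori))

# Three hypothetical transcoding schemes under the same bit-rate budget
schemes = [(1920, 30, 32), (3840, 15, 32), (3840, 30, 64)]
best = best_scheme(schemes, ori=(3840, 30, 32))
```

With these invented coefficients the reduced-resolution scheme wins; in the patent the alphas depend on the field-of-view content features, so the ranking changes with the video.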
Based on the above steps, the quality evaluation result of every coding scheme combination that satisfies the transcoding requirement can be obtained, and the scheme with the best quality selected. Without performing any encoding, the method evaluates the quality loss that the given coding parameters would introduce relative to the original video, using only those parameters and the features of the original high-quality video; meanwhile, the block-based feature extraction and calculation strategy supports fast calculation for user fields of view that appear at random in practical applications. The method is therefore well suited to high-freedom immersive viewing and high-quality panoramic video transmission scenarios.

Claims (1)

1. A method of evaluating an immersive video transcoding scheme comprising the steps of:
step 1, dividing a complete high-bit-rate panoramic video into a plurality of regular rectangles, wherein the size of each rectangle in the transverse and longitudinal directions is smaller than one half of the size of the user's field of view in the corresponding direction;
step 2, performing Sobel filtering on each block of each frame of the partitioned video to extract edge feature information and obtain the corresponding gradient-domain information, then taking the difference between corresponding blocks of consecutive frames and taking the mean σ_mean and the standard deviation σ_std of the resulting residual;
step 3, after the position information of the user's field of view is obtained, calculating the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:

α_s = f_s(σ_mean^FoV, σ_std^FoV), α_q = f_q(σ_mean^FoV, σ_std^FoV), α_t = f_t(σ_mean^FoV, σ_std^FoV)

(the exact functional forms appear only as equation images in the source), wherein α_s is the coefficient describing the change of video quality as the video resolution decreases, α_q is the coefficient describing the change of video quality as the compression quantization step increases, α_t is the coefficient describing the change of video quality as the frame rate decreases, and the field-of-view features are aggregated as

σ_mean^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_mean^(k), σ_std^FoV = Σ_{k=1}^{n} (s_k / s_FoV) · σ_std^(k),

where n denotes the number of blocks covered by the current field of view, s_k is the area of block k covered by the field of view, s_FoV is the area of the field of view, and σ^FoV is the sum of the features of all blocks covered by the field of view;
step 4, calculating the quality evaluation model Q(s, q, t), the predicted video quality after encoding with the given parameters (the two model equations appear only as equation images in the source), from the normalized coding parameters: resolution s = S_tar / S_ori, frame rate t = T_tar / T_ori, and quantization step size q = Q_tar / Q_ori, where S_tar, T_tar and Q_tar are the three coding parameters, i.e. the actual resolution, frame rate and quantization step size used at transcoding, S_ori, T_ori and Q_ori are the resolution, frame rate and quantization step size of the original high-quality video, and α_s is a value that depends on the input Q_tar;
and step 5, evaluating each coding scheme with the methods of steps 1 to 4 to obtain its quality evaluation prediction, and selecting the resolution, frame rate and quantization step size combination that maximizes Q(s, q, t) as the final coding scheme.
CN201911257216.5A 2019-12-10 2019-12-10 Method for evaluating immersive video transcoding scheme Active CN110944200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911257216.5A CN110944200B (en) 2019-12-10 2019-12-10 Method for evaluating immersive video transcoding scheme

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911257216.5A CN110944200B (en) 2019-12-10 2019-12-10 Method for evaluating immersive video transcoding scheme

Publications (2)

Publication Number Publication Date
CN110944200A CN110944200A (en) 2020-03-31
CN110944200B (en) 2022-03-15

Family

ID=69910006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911257216.5A Active CN110944200B (en) 2019-12-10 2019-12-10 Method for evaluating immersive video transcoding scheme

Country Status (1)

Country Link
CN (1) CN110944200B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113497932B (en) * 2020-04-07 2022-10-18 上海交通大学 Method, system and medium for measuring video transmission time delay
CN111696081B (en) * 2020-05-18 2024-04-09 南京大学 Method for reasoning panoramic video quality from visual field video quality
CN114556430A (en) * 2020-10-30 2022-05-27 深圳市大疆创新科技有限公司 Data processing method and device, image signal processor and movable platform
CN112653892B (en) * 2020-12-18 2024-04-23 杭州当虹科技股份有限公司 Method for realizing transcoding test evaluation by utilizing video features
CN114760506B (en) * 2022-04-11 2024-02-09 北京字跳网络技术有限公司 Video transcoding evaluation method, device, equipment and storage medium
CN115225961B (en) * 2022-04-22 2024-01-16 上海赛连信息科技有限公司 No-reference network video quality evaluation method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9615098B1 (en) * 2009-11-30 2017-04-04 Google Inc. Adaptive resolution transcoding for optimal visual quality
CN106973281A * 2017-01-19 2017-07-21 宁波大学 A virtual-view video quality prediction method
CN107040783A (en) * 2015-10-22 2017-08-11 联发科技股份有限公司 Video coding and decoding methods and devices for non-spliced pictures of video coding system
CN107040771A * 2017-03-28 2017-08-11 北京航空航天大学 An encoding optimization method for panoramic video
WO2018136301A1 (en) * 2017-01-20 2018-07-26 Pcms Holdings, Inc. Field-of-view prediction method based on contextual information for 360-degree vr video
CN108513119A * 2017-02-27 2018-09-07 阿里巴巴集团控股有限公司 Image mapping and processing methods, apparatus and machine-readable medium
CN108833880A (en) * 2018-04-26 2018-11-16 北京大学 Method and device for viewpoint prediction and optimal transmission of virtual reality video using cross-user behavior patterns
CN108924554A * 2018-07-13 2018-11-30 宁波大学 A panoramic video coding rate-distortion optimization method using spherically weighted structural similarity
CN109769104A * 2018-10-26 2019-05-17 西安科锐盛创新科技有限公司 Panoramic image transmission method and apparatus for unmanned aerial vehicles

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109660825B (en) * 2017-10-10 2021-02-09 腾讯科技(深圳)有限公司 Video transcoding method and device, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on 3D video coding oriented to virtual-view quality; Yang Chao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2018-02-15; full text *

Also Published As

Publication number Publication date
CN110944200A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN110944200B (en) Method for evaluating immersive video transcoding scheme
Yang et al. Predicting the perceptual quality of point cloud: A 3d-to-2d projection-based exploration
TWI805784B (en) A method for enhancing quality of media
Moorthy et al. Visual quality assessment algorithms: what does the future hold?
CN112419219B (en) Image enhancement model training method, image enhancement method and related device
CN104618803B (en) Information-pushing method, device, terminal and server
CN104427291B (en) A kind of image processing method and equipment
JP5165743B2 (en) Method and apparatus for synchronizing video data
CN114363623A (en) Image processing method, image processing apparatus, image processing medium, and electronic device
CN110751649A (en) Video quality evaluation method and device, electronic equipment and storage medium
CN113538324B (en) Evaluation method, model training method, device, medium and electronic device
Shao et al. No-reference view synthesis quality prediction for 3-D videos based on color–depth interactions
CN106664404A (en) Block segmentation mode processing method in video coding and relevant apparatus
CN118283218A (en) Audio and video reconstruction method and device for real-time conversation and electronic equipment
CN114422795A (en) Face video coding method, decoding method and device
CN117478886A (en) Multimedia data encoding method, device, electronic equipment and storage medium
WO2023246926A1 (en) Model training method, video encoding method, and video decoding method
Xie et al. Just noticeable visual redundancy forecasting: a deep multimodal-driven approach
Xintao et al. Hide the image in fc-densenets to another image
CN117953126A (en) Face rendering method and device and electronic equipment
CN116980604A (en) Video encoding method, video decoding method and related equipment
CN113628121B (en) Method and device for processing and training multimedia data
Kaimal et al. A modified anti-forensic technique for removing detectable traces from digital images
CN115409721A (en) Dark light video enhancement method and device
CN108259891B (en) Blind assessment method of 3D video quality based on binocular spatiotemporal intrinsic reasoning mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant