CN118450195A - Video image quality adjustment method, system, computer equipment and storage medium - Google Patents

Video image quality adjustment method, system, computer equipment and storage medium

Info

Publication number
CN118450195A
CN118450195A (application CN202410541608.9A)
Authority
CN
China
Prior art keywords
video
difference
data
color
adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410541608.9A
Other languages
Chinese (zh)
Inventor
鲁永明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guangchen Information Technology Co ltd
Original Assignee
Guangzhou Guangchen Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guangchen Information Technology Co ltd filed Critical Guangzhou Guangchen Information Technology Co ltd
Priority to CN202410541608.9A
Publication of CN118450195A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention relates to the technical field of video image processing, and in particular to a video image quality adjustment method, system, computer equipment, and storage medium. The method comprises: obtaining a viewer-side video and the corresponding host-side (anchor) video; inputting the obtained viewer-side video and corresponding host-side video into a preset difference analysis model, which outputs a difference type, the difference types including color difference, picture clarity difference, motion feature difference, and frame rate difference; and adjusting the video image quality on the viewer side according to the difference type, using at least one of color adjustment, clarity adjustment, motion effect adjustment, and frame rate adjustment. Based on the playback conditions on the viewer's device, the present invention analyzes the differences between the viewer-side video and the host-side video and dynamically adjusts the video image quality on the viewer side, narrowing the gap between the two videos and thereby improving the viewing experience.

Description

Video image quality adjustment method, system, computer equipment and storage medium

Technical Field

The present invention relates to the technical field of video image processing, and in particular to a video image quality adjustment method, system, computer equipment, and storage medium.

Background Art

Live video streaming is becoming increasingly popular. In a live-streaming system, the host presents a performance to viewers in real time through a streaming application. However, because the quality of the devices used by viewers varies widely, the picture quality seen by viewers can differ from the picture produced by the host, giving some viewers a poor experience. Solving this problem requires improving the picture-quality adaptation capability of the live-streaming system so that viewers receive a more consistent and clearer viewing experience.

Summary of the Invention

To solve the above problems, the main object of the present invention is to provide a video image quality adjustment method, system, computer device, and storage medium that can analyze, based on the playback conditions on the viewer's device, the differences between the viewer-side video and the host-side video, and dynamically adjust the video image quality on the viewer side, narrowing the gap between the two videos and thereby improving the viewing experience.

To achieve the above object, the technical solution adopted by the present invention is as follows:

A video image quality adjustment method, comprising the following steps:

M1. Obtain a viewer-side video and the corresponding host-side video;

M2. Input the obtained viewer-side video and the corresponding host-side video into a preset difference analysis model and output a difference type, the difference types including color difference, picture clarity difference, motion feature difference, and frame rate difference;

M3. Adjust the video image quality on the viewer side according to the difference type, using at least one of color adjustment, clarity adjustment, motion effect adjustment, and frame rate adjustment.

Further, the construction of the difference analysis model comprises the following steps:

S1. Automatically collect data of historical viewer-side videos and the corresponding host-side videos through the live-streaming system, and store them by category;

S2. Preprocess the collected data;

S3. Extract features from the preprocessed data, including color features, clarity features, motion features, and frame rate features;

S4. Divide the data set into a training set and a test set;

S5. Build the difference analysis model based on the random forest algorithm;

S6. Train the difference analysis model on the training set and optimize the model parameters to improve performance;

S7. Evaluate the model on the test set, using the confusion matrix, accuracy, recall, and F1-score metrics;

S8. Deploy the trained and evaluated difference analysis model into the live-streaming system through an API interface or an integrated SDK.

Further, step S2 comprises:

Clean the collected data of historical viewer-side videos and corresponding host-side videos, including removing duplicate data, handling missing values, and handling outliers;

Convert data from different sources into a unified format and standardize it;

Reduce the dimensionality of the data as needed;

Augment the data set to increase the diversity and quantity of the data.

Further, step S3 comprises:

Clarity feature extraction: combine the mean squared error, the structural similarity index, and the peak signal-to-noise ratio to compute an overall clarity score for each video frame;

Motion feature extraction: detect motion information in the video with the optical flow method, and compute the motion between adjacent frames with the inter-frame difference method;

Frame rate feature extraction: collect frame rate statistics for the video, including the average frame rate and the maximum frame rate;

Color feature extraction, which specifically comprises the following steps:

Convert the preprocessed video data to the HSV color space, and obtain the color histogram data of each frame by computing the distribution of colors over the hue, saturation, and value (brightness) channels of each frame;

Cluster and classify the color histogram data with the K-means clustering algorithm, grouping similar colors into the same category to extract and classify the color information;

Explore the relationships between different colors through color correlation analysis, and mine the color combinations and color-change patterns in the video.

Further, in step S4, the ratio of the training set to the test set is 8:2.

Further, in step M3,

the color adjustment comprises adjusting the color saturation, hue, and brightness parameters of the viewer-side video;

the clarity adjustment comprises adjusting the sharpness, contrast, and resolution parameters of the viewer-side video;

the motion effect adjustment comprises adjusting the motion compensation, inter-frame interpolation, and motion blur parameters of the viewer-side video;

the frame rate adjustment comprises adjusting the frame rate parameter of the viewer-side video.

A computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to perform the method described above.

A computer device, comprising a program, the program comprising instructions for executing the method described above.

Further, the computer device comprises a processor and a memory, wherein the program is stored in the memory and configured to be executed by the processor.

A video image quality adjustment system for executing the method described above, the video image quality adjustment system comprising:

a data acquisition module, configured to obtain a viewer-side video and the corresponding host-side video;

a data analysis module, configured to input the obtained viewer-side video and the corresponding host-side video into a preset difference analysis model and output a difference type, the difference types including color difference, picture clarity difference, motion feature difference, and frame rate difference; and

an image quality adjustment module, configured to adjust the video image quality on the viewer side according to the difference type, using at least one of color adjustment, clarity adjustment, motion effect adjustment, and frame rate adjustment.

The beneficial effects of the present invention are as follows:

The present invention analyzes the viewer-side video and the host-side video with a preset difference analysis model that can output different difference types, including color difference, picture clarity difference, motion feature difference, and frame rate difference, thereby accurately capturing the specific differences between the two videos. Based on the identified difference type, the video image quality on the viewer side is adjusted in a personalized way, using at least one of color, clarity, motion effect, and frame rate adjustment, so as to better compensate for the differences between the viewer's device and the host's device and improve the viewing experience. Moreover, the present invention adjusts the viewer-side video image quality dynamically, responding immediately to changing conditions in different scenarios, so that viewers see a smoother and clearer picture and feel more immersed. By narrowing the gap between the viewer-side video and the host-side video, the method effectively improves the viewing experience and gives users a more faithful and comfortable audio-visual experience.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the video image quality adjustment method of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawing and specific embodiments. It should be noted that, provided there is no conflict, the embodiments and technical features described below can be combined arbitrarily to form new embodiments.

Referring to FIG. 1, the present invention provides a video image quality adjustment method, system, computer device, and storage medium that can analyze, based on the playback conditions on the viewer's device, the differences between the viewer-side video and the host-side video, and dynamically adjust the video image quality on the viewer side in a personalized way, narrowing the gap between the two videos and thereby improving the viewing experience.

The video image quality adjustment method comprises the following steps:

M1. Obtain a viewer-side video and the corresponding host-side video;

M2. Input the obtained viewer-side video and the corresponding host-side video into a preset difference analysis model and output a difference type, the difference types including color difference, picture clarity difference, motion feature difference, and frame rate difference;

M3. Adjust the video image quality on the viewer side according to the difference type, using at least one of color adjustment, clarity adjustment, motion effect adjustment, and frame rate adjustment.

Further, in step M3,

the color adjustment comprises adjusting the color saturation, hue, and brightness parameters of the viewer-side video;

the clarity adjustment comprises adjusting the sharpness, contrast, and resolution parameters of the viewer-side video;

the motion effect adjustment comprises adjusting the motion compensation, inter-frame interpolation, and motion blur parameters of the viewer-side video;

the frame rate adjustment comprises adjusting the frame rate parameter of the viewer-side video.

The specific methods are as follows:

For color differences, the color saturation, hue, brightness, and related parameters of the viewer-side video can be adjusted so that its colors match those of the host-side video.

For picture clarity differences, the sharpness, contrast, resolution, and related parameters of the viewer-side video can be adjusted according to the clarity data output by the difference analysis model, improving picture clarity.

For motion feature differences, the motion compensation, inter-frame interpolation, motion blur, and related parameters of the viewer-side video can be adjusted according to the movement of objects in the video, producing smooth motion.

For frame rate differences, the frame rate parameter of the viewer-side video can be adjusted according to the frame rate data output by the difference analysis model so that it matches the frame rate of the host-side video, preventing stuttering or discontinuous playback.

Through the above methods, the video image parameters on the viewer side can be adjusted in a targeted way for each difference type, so that the image quality of the viewer-side video matches that of the host-side video.
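
As one concrete illustration of the color adjustment described above (the patent does not specify an implementation), the viewer frame's hue, saturation, and brightness statistics can be shifted toward those of the matching host frame. The sketch below assumes OpenCV and NumPy and uses simple per-channel mean matching in HSV space; the matching rule itself is an assumption.

```python
import cv2
import numpy as np

def match_color_stats(viewer_frame_bgr, host_frame_bgr):
    """Shift the viewer frame's mean hue/saturation/brightness toward the host frame's."""
    v = cv2.cvtColor(viewer_frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h = cv2.cvtColor(host_frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    for c in range(3):                       # 0: hue, 1: saturation, 2: value (brightness)
        v[:, :, c] += h[:, :, c].mean() - v[:, :, c].mean()
    v[:, :, 0] %= 180                        # hue wraps around in OpenCV's 0-179 range
    v[:, :, 1:] = np.clip(v[:, :, 1:], 0, 255)
    return cv2.cvtColor(v.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

The clarity, motion effect, and frame rate adjustments would follow the same pattern: read the difference reported by the model and move the corresponding viewer-side parameter toward the host-side value.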

In the above scheme, the video image quality adjustment method uses a difference analysis model to adjust the viewer-side video image quality according to the type of difference between the viewer-side video and the host-side video, thereby improving the viewing experience. A more detailed analysis follows:

1. Difference analysis model: comparing the viewer-side video and the host-side video with a preset difference analysis model accurately identifies the different difference types, such as color difference, picture clarity difference, motion feature difference, and frame rate difference, providing an accurate basis for the subsequent image quality adjustment.

2. Color adjustment: adjusting the color saturation, hue, and brightness parameters of the viewer-side video makes the picture colors more vivid, keeps the visual effect faithful, and increases the viewer's sense of immersion.

3. Clarity adjustment: adjusting the sharpness, contrast, and resolution parameters of the viewer-side video makes the picture clearer and richer in detail, helping users see the video content more clearly.

4. Motion effect adjustment: adjusting the motion compensation, inter-frame interpolation, and motion blur parameters of the viewer-side video improves the motion rendering, reduces judder and blur, and makes playback smoother and more stable.

5. Frame rate adjustment: adjusting the frame rate parameter of the viewer-side video makes playback smoother, avoids freezes and dropped frames, and improves the viewing experience.

In summary, by jointly adjusting color, clarity, motion effect, and frame rate, this video image quality adjustment method comprehensively improves the quality of the viewer-side video, giving users a better viewing experience and improving satisfaction and retention.

Further, the construction of the difference analysis model comprises the following steps:

S1. Automatically collect data of historical viewer-side videos and the corresponding host-side videos through the live-streaming system, and store them by category. This step automates data collection and organization, provides a large amount of usable data for the subsequent model construction, and makes it possible to analyze the differences between viewer-side and host-side videos accurately.

S2. Preprocess the collected data. Preprocessing removes noise, fills in missing values, standardizes the data, and so on, which improves data quality and model stability and reduces interference during model training.

S3. Extract features from the preprocessed data, including color features, clarity features, motion features, and frame rate features. Extracting these features captures the key information in the video content and helps the model understand the data, improving its accuracy and generalization ability.

S4. Divide the data set into a training set and a test set at a ratio of 8:2. A reasonable split makes it possible to evaluate model performance properly and also guards against overfitting. This embodiment uses an 8:2 ratio because its data set is large, so more data can be allocated to the training set to improve training. The training set is used for learning; the test set is used to evaluate performance.
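
A minimal sketch of the 8:2 split in step S4, assuming scikit-learn; the synthetic feature matrix and labels below are placeholders for the extracted color/clarity/motion/frame-rate features and their difference-type labels, which the patent does not enumerate.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))     # placeholder: 1000 video pairs, 12 extracted features
y = rng.integers(0, 4, size=1000)   # placeholder labels: 0=color, 1=clarity, 2=motion, 3=frame rate

# 8:2 split as described in step S4
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
```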

S5. Build the difference analysis model based on the random forest algorithm. Random forest is an ensemble learning algorithm with high accuracy and robustness; it handles complex, high-dimensional, and large-scale data well and therefore supports building a difference analysis model with strong performance.

S6. Train the difference analysis model on the training set and optimize the model parameters to improve performance. Training and parameter optimization steadily improve the model's predictive ability, adapt it to real conditions, and improve its results on the difference analysis task.

S7. Evaluate the model on the test set, using the confusion matrix, accuracy, recall, and F1-score metrics. Evaluating several metrics together gives a more complete picture of model performance and helps reveal problems for further optimization.
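
Continuing the split sketch above, steps S5-S7 could look roughly as follows with scikit-learn; the hyperparameters are illustrative, not values specified by the patent.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, recall_score

# S5/S6: build and train the random-forest difference classifier
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# S7: evaluate on the held-out test set
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print("accuracy:", accuracy_score(y_test, y_pred))
print("recall  :", recall_score(y_test, y_pred, average="macro"))
print("F1      :", f1_score(y_test, y_pred, average="macro"))
```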

S8. Deploy the trained and evaluated difference analysis model into the live-streaming system through an API interface or an integrated SDK. Once deployed, the model enables real-time analysis of video content differences, gives the live-streaming system more intelligent and personalized behavior, and improves the user experience. Here, SDK is short for Software Development Kit.
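
One possible shape of the API deployment in step S8 is sketched below as a small HTTP service. Flask, the endpoint name, and the model file path are all assumptions for illustration, since the patent does not specify a deployment framework.

```python
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("difference_model.pkl")   # hypothetical path to the trained model

DIFF_NAMES = ["color", "clarity", "motion", "frame_rate"]

@app.route("/analyze", methods=["POST"])      # hypothetical endpoint
def analyze():
    # The client sends the feature vector already extracted from a viewer/host frame pair.
    features = request.get_json()["features"]
    label = int(model.predict([features])[0])
    return jsonify({"difference_type": DIFF_NAMES[label]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```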

Further, step S2 comprises:

Clean the collected data of historical viewer-side videos and corresponding host-side videos, including removing duplicate data, handling missing values, and handling outliers;

Convert data from different sources into a unified format and standardize it;

Reduce the dimensionality of the data as needed;

Augment the data set to increase the diversity and quantity of the data.

Step S2 therefore covers cleaning, format conversion, dimensionality reduction, and augmentation of the viewer-side and host-side video data, with the following technical effects:

1. Data cleaning: cleaning the historical viewer-side and host-side video data removes duplicate data and handles missing values and outliers, improving data quality and reliability; it reduces noise, keeps the data accurate, and supports the subsequent analysis and modeling work.

2. Format conversion and standardization: converting data from different sources into a common format and standardizing it improves consistency and comparability, which in turn simplifies later data integration and analysis and improves efficiency.

3. Dimensionality reduction: for data whose dimensionality needs to be reduced, dimensionality reduction removes redundant information, simplifies the model, and speeds up processing and analysis; it also helps reveal latent relationships in the data and improves the model's generalization ability and prediction accuracy.

4. Data augmentation: augmenting the data set enlarges the sample pool and increases data diversity and quantity, improving training and generalization; greater diversity helps the model adapt to different situations, improves robustness and stability, and reduces the risk of overfitting.

Overall, the data processing in step S2 improves data quality and availability and lays a solid foundation for the subsequent analysis and modeling, improving the efficiency of the whole pipeline.
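
A minimal preprocessing sketch for step S2, assuming the collected per-video measurements have already been flattened into a numeric pandas DataFrame; the percentile-based outlier clipping and the 95%-variance PCA threshold are illustrative choices, not values given in the patent.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Clean, standardize, and optionally reduce the dimensionality of a numeric feature table."""
    # Cleaning: drop duplicates, fill missing values with the median, clip extreme outliers
    df = df.drop_duplicates()
    df = df.fillna(df.median())
    df = df.apply(lambda col: col.clip(col.quantile(0.01), col.quantile(0.99)))

    # Standardization: zero mean, unit variance per column
    scaled = StandardScaler().fit_transform(df)

    # Optional dimensionality reduction: keep enough components for 95% of the variance
    reduced = PCA(n_components=0.95).fit_transform(scaled)
    return pd.DataFrame(reduced)
```

Augmentation of the raw videos themselves (for example, re-encoding clips at different bitrates or resolutions) would happen before this tabular stage and is omitted from the sketch.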

Further, step S3 comprises:

Clarity feature extraction: combine the mean squared error, the structural similarity index, and the peak signal-to-noise ratio to compute an overall clarity score for each video frame. This step reflects the clarity of each frame accurately, helps identify low-clarity frames, and provides a basis for later processing and optimization.
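
A sketch of such a combined clarity score, assuming scikit-image and NumPy and grayscale frames in the 0-255 range; the weighting of the three terms is an illustrative assumption, since the patent does not give a formula.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def clarity_score(viewer_gray, host_gray):
    """Combine MSE, SSIM, and PSNR between a viewer frame and the matching host frame."""
    mse = np.mean((viewer_gray.astype(np.float64) - host_gray.astype(np.float64)) ** 2)
    ssim = structural_similarity(viewer_gray, host_gray, data_range=255)
    psnr = peak_signal_noise_ratio(host_gray, viewer_gray, data_range=255)
    # Higher SSIM/PSNR and lower MSE all indicate a sharper, more faithful viewer frame.
    return 0.4 * ssim + 0.4 * (psnr / 50.0) - 0.2 * (mse / 255.0 ** 2)
```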

Motion feature extraction: detect motion information in the video with the optical flow method, and compute the motion between adjacent frames with the inter-frame difference method. This step extracts the motion features of the video effectively.
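
A minimal sketch of these two measurements with OpenCV, assuming consecutive grayscale frames; the Farneback parameters and the change threshold of 25 are illustrative defaults, not values from the patent.

```python
import cv2

def motion_features(prev_gray, curr_gray):
    """Dense optical flow (Farneback) plus a simple inter-frame difference measure."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    frame_diff = cv2.absdiff(curr_gray, prev_gray)
    return {
        "mean_flow": float(magnitude.mean()),              # average motion magnitude per pixel
        "changed_ratio": float((frame_diff > 25).mean()),  # share of pixels that changed noticeably
    }
```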

Frame rate feature extraction: collect frame rate statistics for the video, including the average frame rate and the maximum frame rate. This step helps assess the smoothness and picture quality of the video.
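
One way to gather these statistics, assuming OpenCV can decode the clip and expose per-frame timestamps; a real live-stream pipeline would more likely read timestamps from the player, so this is only an offline sketch.

```python
import cv2

def frame_rate_features(path):
    """Nominal FPS from metadata plus average and maximum FPS measured from frame timestamps."""
    cap = cv2.VideoCapture(path)
    nominal_fps = cap.get(cv2.CAP_PROP_FPS)
    timestamps = []
    while True:
        ok, _ = cap.read()
        if not ok:
            break
        timestamps.append(cap.get(cv2.CAP_PROP_POS_MSEC))
    cap.release()
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:]) if b > a]
    measured = [1000.0 / dt for dt in intervals]
    avg_fps = sum(measured) / len(measured) if measured else nominal_fps
    max_fps = max(measured) if measured else nominal_fps
    return {"nominal_fps": nominal_fps, "average_fps": avg_fps, "max_fps": max_fps}
```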

Color feature extraction: the video data are converted to the HSV color space, and the distribution of colors over the hue, saturation, and value (brightness) channels of each frame is computed to obtain the color histogram data of each frame. Clustering and classifying the color histogram data with the K-means clustering algorithm then extracts and classifies the color information. The procedure comprises the following steps:

Convert the preprocessed video data to the HSV color space, and obtain the color histogram data of each frame by computing the distribution of colors over the hue, saturation, and value (brightness) channels of each frame;

Cluster and classify the color histogram data with the K-means clustering algorithm, grouping similar colors into the same category to extract and classify the color information;

Explore the relationships between different colors through color correlation analysis, and mine the color combinations and color-change patterns in the video.

This embodiment has the following technical effects:

1. HSV color space conversion: converting the preprocessed video data to the HSV color space captures color information better; compared with the RGB color space, HSV is closer to human perception and describes color attributes more accurately.

2. Color histogram extraction: computing the distribution of colors over the hue, saturation, and value channels of each frame yields per-frame color histogram data, which supports quantitative description and statistical analysis of color.

3. K-means clustering: clustering and classifying the color histogram data with the K-means algorithm groups similar colors into the same category, extracting and classifying the color information and helping to identify the distribution and trends of the colors in the video.

4. Color correlation analysis: color correlation analysis studies the correlations between different colors to reveal the patterns or relationships between them; in image processing it is used in tasks such as image segmentation, object detection, and target tracking, and analyzing the spatial distribution and interaction of colors helps an algorithm understand image content, improving accuracy and efficiency. Here, it explores the relationships between colors in depth and mines the color combinations and color-change patterns in the video, helping to understand the role and expression of color in the video.

In short, this color feature extraction method extracts rich color information from the video data, and the analysis and classification of the color features reveal the regularities and trends of the colors in the video, providing an important basis for further video content analysis and applications.
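
A compact sketch of the HSV histogram and K-means steps for a single frame, assuming OpenCV and scikit-learn; the histogram bin counts, the pixel sample size, and the number of clusters are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def color_features(frame_bgr, bins=(16, 8, 8), k=5):
    """Per-frame HSV color histogram plus K-means clustering of sampled pixel colors."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, None).flatten()     # normalized hue/saturation/value histogram

    # Sample pixels to keep K-means fast (assumes the frame has at least 2000 pixels)
    pixels = hsv.reshape(-1, 3).astype(np.float32)
    sample_idx = np.random.default_rng(0).choice(len(pixels), size=2000, replace=False)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels[sample_idx])
    return hist, km.cluster_centers_               # dominant HSV color clusters of the frame
```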

A computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to perform the method described above.

A computer device, comprising a program, the program comprising instructions for executing the method described above.

Further, the computer device comprises a processor and a memory, wherein the program is stored in the memory and configured to be executed by the processor.

A video image quality adjustment system for executing the method described above, the video image quality adjustment system comprising:

a data acquisition module, configured to obtain a viewer-side video and the corresponding host-side video;

a data analysis module, configured to input the obtained viewer-side video and the corresponding host-side video into a preset difference analysis model and output a difference type, the difference types including color difference, picture clarity difference, motion feature difference, and frame rate difference; and

an image quality adjustment module, configured to adjust the video image quality on the viewer side according to the difference type, using at least one of color adjustment, clarity adjustment, motion effect adjustment, and frame rate adjustment.

In summary, the present invention analyzes the differences between the viewer-side video and the host-side video and dynamically adjusts the viewer-side video image quality in a personalized way, thereby improving the viewing experience. The specific technical effects are as follows:

1. Difference analysis model: analyzing the viewer-side video and the host-side video with a preset difference analysis model outputs different difference types, including color difference, picture clarity difference, motion feature difference, and frame rate difference, accurately capturing the specific differences between the two.

2. Personalized adjustment: based on the identified difference type, the viewer-side video image quality is adjusted in a personalized way, using at least one of color, clarity, motion effect, and frame rate adjustment, so as to better compensate for the differences between the viewer's device and the host's device and improve the viewing experience.

3. Dynamic adjustment: the method adjusts the viewer-side video image quality dynamically and responds immediately to changing conditions in different scenarios, so that viewers see a smoother and clearer picture and feel more immersed.

4. Improved user experience: by narrowing the gap between the viewer-side video and the host-side video, the method effectively improves the viewing experience and gives users a more faithful and comfortable audio-visual experience.

The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Any person of ordinary skill in the art can implement the present invention as shown in the accompanying drawing and described above. However, any changes, modifications, or equivalent evolutions made by those skilled in the art using the technical content disclosed above, without departing from the scope of the technical solution of the present invention, are equivalent embodiments of the present invention; likewise, any equivalent changes, modifications, or evolutions made to the above embodiments based on the essential technology of the present invention remain within the scope of protection of the technical solution of the present invention.

Claims (10)

1. A video image quality adjustment method, comprising the steps of:
M1, acquiring an audience user side video and a corresponding anchor user side video;
M2, inputting the acquired audience user side video and the corresponding anchor user side video into a preset difference analysis model, and outputting a difference type, wherein the difference type comprises a color difference, a picture definition difference, a motion characteristic difference and a frame rate difference;
and M3, adjusting the video image quality of the audience user side according to the difference type, wherein the adjustment comprises at least one of color adjustment, definition adjustment, motion effect adjustment and frame rate adjustment.
2. The video image quality adjustment method according to claim 1, characterized in that: the construction step of the differential analysis model comprises the following steps:
S1, automatically collecting data of historical audience user side videos and corresponding anchor user side videos through a live broadcast system, and storing the data in a classified mode;
S2, preprocessing the collected data;
S3, extracting features from the preprocessed data, the features comprising: color features, definition features, motion features, and frame rate features;
S4, dividing the data set into a training set and a test set;
S5, constructing a difference analysis model based on a random forest algorithm;
S6, training the difference analysis model by using the training set, and optimizing model parameters to improve model performance;
S7, evaluating the performance of the model by using the test set, comprising: evaluating through the confusion matrix, accuracy, recall and F1-score metrics;
and S8, deploying the trained and evaluated difference analysis model into a live broadcast system through an API interface or an integrated SDK.
3. The video image quality adjustment method according to claim 2, characterized in that: the step S2 comprises the following steps:
the collected data of the audience user side video and the corresponding anchor user side video in the historical period are cleaned, including repeated data removal, missing value processing and abnormal value processing;
Carrying out unified format conversion and standardization treatment on data from different sources;
performing dimension reduction treatment on the data according to the need;
and the data set is subjected to enhancement processing, so that the diversity and the number of the data are increased.
4. The video image quality adjustment method according to claim 2, characterized in that: the step S3 comprises the following steps:
definition feature extraction: comprehensively evaluating the definition score of the video frame by combining the mean square error, the structural similarity index and the peak signal-to-noise ratio;
motion feature extraction: detecting motion information in the video by an optical flow method, and calculating motion conditions between adjacent frames by an inter-frame difference method;
frame rate feature extraction: counting frame rate information of the video, including the average frame rate and the maximum frame rate;
color feature extraction, which specifically comprises the following steps:
converting the preprocessed video data into an HSV color space, and obtaining color histogram data of each frame of image by calculating the distribution of each color in the three channels of hue, saturation and brightness in each frame of image;
clustering and classifying the color histogram data by using a K-means clustering algorithm, and classifying similar colors into the same category to realize extraction and classification of color information;
and exploring the association relation among different colors through a color correlation analysis method, and mining the collocation and change rule characteristics among different colors in the video.
5. The video image quality adjustment method according to claim 2, characterized in that: in step S4, the ratio of training set to test set is 8:2.
6. The video image quality adjustment method according to claim 1, characterized in that: in the step M3 of the process,
The color adjustment includes: adjusting color saturation, hue and brightness parameters of the video of the audience user side;
The sharpness adjustment includes: adjusting parameters of sharpness, contrast and resolution of the video of the user side of the audience;
The motion effect adjustment includes: adjusting motion compensation, inter-frame interpolation and motion blur parameters of the video of the audience user side;
the frame rate adjustment, comprising: and adjusting the frame rate parameters of the video of the audience user side.
7. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program being executable by a processor for performing the method of any of claims 1-6.
8. A computer device, comprising: a program comprising instructions for performing the method of any one of claims 1-6.
9. A computer device according to claim 8, characterized by comprising: a processor and a memory, wherein the program is stored in the memory and configured to be executed by the processor.
10. A video image quality adjustment system, characterized by: the video image quality adjustment system for performing the method of any one of claims 1-6; the video image quality adjustment system includes:
the data acquisition module is used for acquiring the video of the audience user side and the corresponding video of the anchor user side;
The data analysis module is used for inputting the acquired audience user side video and the corresponding anchor user side video into a preset difference analysis model, and outputting a difference type, wherein the difference type comprises a color difference, a picture definition difference, a motion characteristic difference and a frame rate difference;
and an image quality adjustment module, configured to adjust the video image quality of the audience user side according to the difference type, the adjustment comprising at least one of color adjustment, definition adjustment, motion effect adjustment and frame rate adjustment.
CN202410541608.9A 2024-04-30 2024-04-30 Video image quality adjustment method, system, computer equipment and storage medium Pending CN118450195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410541608.9A CN118450195A (en) 2024-04-30 2024-04-30 Video image quality adjustment method, system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410541608.9A CN118450195A (en) 2024-04-30 2024-04-30 Video image quality adjustment method, system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118450195A 2024-08-06

Family

ID=92317476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410541608.9A Pending CN118450195A (en) 2024-04-30 2024-04-30 Video image quality adjustment method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118450195A (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4034540A1 (en) * 1989-10-31 1991-05-16 Canon Kk Image processing with discrimination or black-white status
CN1452769A (en) * 2000-07-12 2003-10-29 大日精化工业株式会社 Image processing device for transmitting color with fidelity and image data providing method
CN111414842A (en) * 2020-03-17 2020-07-14 腾讯科技(深圳)有限公司 Video comparison method and device, computer equipment and storage medium
CN113301355A (en) * 2020-07-01 2021-08-24 阿里巴巴集团控股有限公司 Video transmission, live broadcast and play method, equipment and storage medium
US20220114647A1 (en) * 2020-10-13 2022-04-14 Trax Technology Solutions Pte Ltd. Visual product recognition based on shopping list
CN114697523A (en) * 2020-12-28 2022-07-01 深圳Tcl数字技术有限公司 Method and system for correcting shooting parameters of camera, display equipment and storage medium
CN115086686A (en) * 2021-03-11 2022-09-20 北京有竹居网络技术有限公司 Video processing method and related device
CN115701627A (en) * 2021-08-02 2023-02-10 广州视源电子科技股份有限公司 Display device image quality debugging method and device, debugging device and debugging system
CN113920037A (en) * 2021-12-14 2022-01-11 极限人工智能有限公司 Endoscope picture correction method, device, correction system and storage medium
CN115002554A (en) * 2022-05-13 2022-09-02 广州方硅信息技术有限公司 Live broadcast picture adjusting method, system and device and computer equipment
CN115802064A (en) * 2022-09-22 2023-03-14 广州方硅信息技术有限公司 Method and device for adjusting brightness of live broadcast wheat, computer equipment and medium
CN116095355A (en) * 2023-01-18 2023-05-09 百果园技术(新加坡)有限公司 Video display control method and device, equipment, medium and product thereof
CN117058030A (en) * 2023-08-14 2023-11-14 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, readable storage medium, and program product
CN117499608A (en) * 2023-10-20 2024-02-02 深圳创维-Rgb电子有限公司 White balance adjustment method, white balance adjustment device, server, display device and storage medium
CN117376493A (en) * 2023-10-23 2024-01-09 神力视界(深圳)文化科技有限公司 Color adjustment method and device
CN117857842A (en) * 2024-03-07 2024-04-09 淘宝(中国)软件有限公司 Image quality processing method in live broadcast scene and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何泽奇等: "人工智能", 31 May 2021, 航空工业出版社, pages: 226 *
钟大伟等: "数据应用工程 方法论与实践", 30 June 2022, 机械工业出版社, pages: 119 - 125 *

Similar Documents

Publication Publication Date Title
Ozcinar et al. Visual attention in omnidirectional video for virtual reality applications
CN102077580B (en) Display control device, display control method
CN101599179B (en) A method for automatically generating highlights of field sports highlights
CN108335307A (en) Adaptive tobacco leaf picture segmentation method and system based on dark primary
CN108010037B (en) Image processing method, device and storage medium
US8942469B2 (en) Method for classification of videos
WO2022041830A1 (en) Pedestrian re-identification method and device
CN110569773A (en) A two-stream network action recognition method based on spatio-temporal saliency action attention
CN104041063B (en) The related information storehouse of video makes and method, platform and the system of video playback
CN109670398A (en) Pig image analysis method and pig image analysis equipment
CN114900692A (en) Video stream frame rate adjustment method and its device, equipment, medium and product
CN113011399A (en) Video abnormal event detection method and system based on generation cooperative judgment network
CN113784171A (en) Video data processing method, device, computer system and readable storage medium
CN116912783B (en) State monitoring method and system of nucleic acid detection platform
WO2023138590A1 (en) Reference-free video quality determination method and apparatus, and device and storage medium
CN110852209B (en) Target detection method and device, medium and equipment
Li et al. Perceptual quality assessment of face video compression: A benchmark and an effective method
CN118450195A (en) Video image quality adjustment method, system, computer equipment and storage medium
CN118585964A (en) Video saliency prediction method and system based on audio-visual correlation feature fusion strategy
Fan et al. Inverse-tone-mapped HDR video quality assessment: A new dataset and benchmark
CN111866583A (en) Video monitoring resource adjusting method, device, medium and electronic equipment
CN118450194B (en) Video image quality inspection method and system
KR101337833B1 (en) Method for estimating response of audience concerning content
CN111327943B (en) Information management method, device, system, computer equipment and storage medium
Ellahi et al. A machine-learning framework to predict TMO preference based on image and visual attention features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination