CN115660950B - Color migration method, device and medium for generating LUT based on neural network - Google Patents
- Publication number: CN115660950B
- Application number: CN202211546270.3A
- Authority: CN (China)
- Prior art keywords: content, LUT, neural network, color, image
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a color transfer method, device and medium that generate a LUT (lookup table) with a neural network, belonging to the fields of computer graphics and computer vision. The method comprises the steps of: selecting a content image and a reference image and feeding both into a neural network to generate a LUT; and applying the generated LUT to the content image to produce a color-transferred image, color transfer being achieved by constraining a style loss and a content loss. The invention improves the speed and stability of color transfer.
Description
Technical field
The present invention relates to the fields of computer graphics and computer vision, and more specifically to a color transfer method, device and medium that generate a LUT with a neural network.
Background art
With the rapid development of digital media, and of digital creative production in particular, color transfer has become an important task in video editing. A common color transfer method takes a given reference image and content image and matches them by computing the mean and variance of each. This approach is time-consuming on large images and can only achieve global color transfer. Some studies use local methods for color transfer, but their running time grows with the number of regions processed.
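The generic mean/variance matching approach mentioned above can be sketched in a few lines. This is an illustrative numpy implementation of global statistics matching, not the method of this invention; all names are assumptions:

```python
import numpy as np

def global_color_transfer(content, reference):
    """Shift and scale `content` so its per-channel mean and standard
    deviation match those of `reference`.

    Both inputs are float RGB arrays of shape (H, W, 3). Every pixel is
    transformed with the same global statistics, which is why this
    approach can only achieve global color transfer.
    """
    c = content.reshape(-1, 3)
    r = reference.reshape(-1, 3)
    # Normalize the content statistics, then re-apply the reference ones.
    out = (content - c.mean(axis=0)) / (c.std(axis=0) + 1e-8)
    return out * r.std(axis=0) + r.mean(axis=0)
```

Because the statistics are computed over every pixel, the cost scales with image size, which matches the time-consumption concern raised above.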
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art by providing a color transfer method, device and medium that generate a LUT with a neural network, so that a color LUT between a reference image and content is generated by the network and the resulting LUT can be applied in video editing, providing intelligent technical support for digital creative production and improving the speed and stability of color transfer.
The purpose of the present invention is achieved by the following scheme:
A color transfer method that generates a LUT with a neural network, comprising the following steps:
selecting a content image and a reference image, and feeding both into a neural network to generate a LUT; then applying the generated LUT to the content image to produce a color-transferred image, color transfer being achieved by constraining a style loss and a content loss.
Further, before selecting the content image and the reference image, the method comprises the step of: constructing a color dataset, the content image and the reference image being selected from the constructed color dataset.
Further, selecting the content image and the reference image comprises the steps of: given a style image and a content video, selecting a frame of the content video as the content image and using the style image as the reference image;
applying the generated LUT to the content image comprises the step of: applying the generated LUT to the frames of the video.
Further, feeding the content image and the reference image into the neural network to generate a LUT, then applying the generated LUT to the content image to produce a color-transferred image, and achieving color transfer by constraining the style loss and the content loss, comprises the sub-steps of:
randomly selecting a reference image I_r and a content image I_c from the constructed color dataset, randomly cropping both to the same resolution in image space, and using the cropped reference image and content image as the input for training the neural network to generate the LUT;
when training of the neural network begins, applying the LUT generated by the network to the content image to obtain the color-transferred image I_t; this process is expressed as:
I_t = Φ(I_r, I_c)(I_c)
where Φ(I_r, I_c) denotes the LUT obtained by feeding the reference image I_r and the content image I_c into the neural network, and Φ(I_r, I_c)(I_c) denotes looking up the color of every pixel of the content image I_c in the generated LUT to obtain the color at the corresponding position;
during training, extracting the deep features F_r, F_c and F_t of the reference image I_r, the content image I_c and the color-transferred image I_t, and computing the corresponding style loss L_s and content loss L_c, expressed as:
L_s = mse(mean(F_r) - mean(F_t)) + mse(std(F_r) - std(F_t))
L_c = mse(mean(F_c) - mean(F_t)) + mse(std(F_c) - std(F_t))
where mean denotes the mean of the image features, std denotes the standard deviation of the image features, and mse denotes the mean squared error of a matrix.
Further, the neural network performs convolutional feature extraction on the input style image and content image, fuses the extracted features, and finally outputs the weight values of a linear combination of N basic LUTs, N being an integer.
Further, the neural network contains N initialized basic LUTs whose values are updated by gradient back-propagation during network training, N being an integer.
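The N learnable basic LUTs and their predicted linear combination can be sketched as follows. This is a minimal numpy illustration; the values of N, the grid size, and all names are assumptions, and in training the basic LUT entries would be tensors updated by back-propagation rather than fixed arrays:

```python
import numpy as np

N, S = 3, 17  # assumed: 3 basic LUTs, 17 grid points per color axis

rng = np.random.default_rng(0)
# N initialized basic 3D LUTs of shape (S, S, S, 3); during training
# their entries would be updated by gradient back-propagation together
# with the network weights.
basic_luts = rng.random((N, S, S, S, 3))

def combine_luts(weights, luts):
    """Blend the N basic LUTs with the N weights predicted by the
    network, producing one fused LUT of shape (S, S, S, 3)."""
    return np.tensordot(np.asarray(weights, dtype=float), luts, axes=1)

fused = combine_luts([0.5, 0.3, 0.2], basic_luts)
```

The network itself only has to regress N scalars per image pair, which is what keeps LUT generation fast.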
Further, after the generated LUT is applied to the frames of the video, producing the color-transferred image comprises the step of: transferring the colors of the content image according to the colors of the style image.
Further, selecting a frame of the content video as the content image comprises the step of: using the first frame of the video as the content image;
after the generated LUT is applied to the first frame of the video, producing the color-transferred image comprises the step of: performing color transfer on all frames of the video with the color-transfer LUT generated by the neural network from the style image.
A computer device comprising a processor and a memory, the memory storing a computer program which, when loaded and executed by the processor, implements the method described above.
A readable storage medium storing a program which, when loaded by a processor, implements the method described above.
The beneficial effects of the present invention include:
The technical solution of the present invention can generate, with a neural network, a LUT for color transfer between a style image and a content image, so that the LUT can be used for color transfer on any video.
By generating a color-transfer LUT, the technical solution of the present invention makes video color transfer more stable.
The technical solution of the present invention combines a neural network that generates the color-transfer LUT of the reference image and the content image with the use of that LUT for video color transfer. It thus both generates the LUT automatically with a neural network and exploits the fast color-grading capability of LUTs, quickly transferring the style of the reference image onto the content video and improving the speed and stability of color transfer.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of color transfer performed by the method of an embodiment of the present invention;
Fig. 3 is a schematic diagram of video color transfer using a generated LUT in the method of an embodiment of the present invention.
Detailed description of the embodiments
All features disclosed in all embodiments of this specification, and all steps of any implicitly disclosed method or process, can be combined, extended and replaced in any way, except for mutually exclusive features and/or steps.
In view of the problems in the background, the inventors of the present invention observed that in recent years, using LUTs to stylize shots or captured footage has become a common technique in film production, because LUTs are fast to evaluate. At the same time, LUTs are more stable on video than other color transfer methods. However, such LUTs are usually produced through careful manual tuning by colorists.
No existing method generates a LUT for color transfer from a reference image and a content image. After creative thought, the technical solution of the present invention combines a neural network that generates the color-transfer LUT of the reference image and the content image with the use of that LUT for video color transfer, thereby both generating the LUT automatically with a neural network and exploiting the fast color-grading capability of LUTs, quickly transferring the style of the reference image onto the content video and improving the speed and stability of color transfer.
As a further inventive concept, in order to quickly generate a LUT from a reference image and a content image and then apply color transfer to a video, the technical solution of the present invention innovatively designs a color transfer scheme in which a neural network generates the LUT, providing intelligent technical support for digital creative production.
In a specific implementation, as shown in Fig. 1, the color transfer method that generates a LUT with a neural network provided by an embodiment of the present invention comprises the following steps:
S1, constructing a large-scale color dataset: the dataset should be diverse in color and rich in content;
S2, training a neural network to generate the color-transfer LUT: randomly selecting a content image and a reference image from the large dataset, feeding them into the network to generate a LUT, applying the LUT to the content image to produce a color-transferred image, and achieving color transfer by constraining the style loss and the content loss;
S3, using the trained neural network for video color transfer: given a style image and a content video, selecting the first frame of the content video as the content image, feeding it into the neural network to generate the color-transfer LUT, and then applying that LUT to the video, thereby realizing color transfer for the whole video.
In practical application, in step S1, the constructed large-scale color dataset should be diverse in color and rich in content.
In practical application, in step S2, as shown in Fig. 2, training the neural network to generate the color-transfer LUT comprises the following sub-steps:
S21: randomly selecting a reference image I_r and a content image I_c from the constructed large-scale color dataset, randomly cropping both to the same resolution in image space, and using the cropped reference image and content image as the input for training the neural network to generate the LUT;
S22: when training the neural network, applying the LUT generated by the network to the content image to obtain the color-transferred image I_t; this process can be expressed as:
I_t = Φ(I_r, I_c)(I_c)
where Φ(I_r, I_c) denotes the LUT obtained by feeding the reference image I_r and the content image I_c into the neural network, and Φ(I_r, I_c)(I_c) denotes looking up the color of every pixel of the content image I_c in the generated LUT to obtain the color at the corresponding position;
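The per-pixel lookup in S22 can be sketched as follows. This is a minimal numpy illustration under stated assumptions: nearest-neighbor lookup is used for brevity (a production implementation would interpolate trilinearly between the eight surrounding LUT entries), and all names are illustrative:

```python
import numpy as np

def apply_lut(image, lut):
    """Look up each pixel's color in a 3D LUT.

    `image` is float RGB in [0, 1] with shape (H, W, 3); `lut` has shape
    (S, S, S, 3). Each pixel's (r, g, b) value is mapped to the nearest
    grid indices, and the LUT entry at those indices is the output color.
    """
    s = lut.shape[0]
    idx = np.clip(np.rint(image * (s - 1)).astype(int), 0, s - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity LUT maps every color to itself (up to grid quantization).
s = 9
g = np.linspace(0.0, 1.0, s)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
```

Because the lookup is a single indexing operation per pixel, applying the LUT is far cheaper than running a network on every frame.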
S23: during training, extracting the deep features F_r, F_c and F_t of the reference image I_r, the content image I_c and the color-transferred image I_t, and computing the corresponding style loss L_s and content loss L_c, which can be expressed as:
L_s = mse(mean(F_r) - mean(F_t)) + mse(std(F_r) - std(F_t))
L_c = mse(mean(F_c) - mean(F_t)) + mse(std(F_c) - std(F_t))
where mean denotes the mean of the image features, std denotes the standard deviation of the image features, and mse denotes the mean squared error of a matrix.
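The shared form of the two losses in S23 can be sketched as follows. This is a minimal numpy illustration; the channel layout and helper names are assumptions, and in training the features would come from a deep network and the loss would be a differentiable tensor:

```python
import numpy as np

def mse(x):
    # mean squared error of a matrix (against zero)
    return float(np.mean(np.square(x)))

def stat_loss(feat_a, feat_b):
    """mse of the difference of channel-wise means, plus mse of the
    difference of channel-wise standard deviations.

    Features are arrays with channels on the last axis, e.g. (H, W, C).
    """
    axes = tuple(range(feat_a.ndim - 1))  # reduce over spatial axes
    return (mse(feat_a.mean(axis=axes) - feat_b.mean(axis=axes))
            + mse(feat_a.std(axis=axes) - feat_b.std(axis=axes)))

# Style loss L_s compares reference features F_r with transferred
# features F_t; content loss L_c compares content features F_c with F_t.
```

Matching only the first- and second-order statistics of the features is what constrains color style without forcing pixel-level identity.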
In practical application, in step S2, the neural network contains N initialized basic LUTs whose values are updated by gradient back-propagation during network training.
In practical application, in step S2, the neural network is designed to perform convolutional feature extraction on the input style image and content image, fuse the extracted features, and finally output the weight values of a linear combination of N basic LUTs.
In practical application, in step S3, given a style image and a content image, the trained neural network is used to generate the color-transfer LUT, and the colors of the content image are transferred according to the colors of the style image.
In practical application, in step S3, as shown in Fig. 3, the first frame of the video is used as the content image, and all frames of the video are color-transferred with the color-transfer LUT generated by the neural network from the given style image.
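The reuse of one generated LUT across all frames can be sketched as follows, with an identity LUT standing in for the network's output; nearest-neighbor lookup and all names are illustrative assumptions:

```python
import numpy as np

def apply_lut(frame, lut):
    # Nearest-neighbor 3D LUT lookup; trilinear interpolation would be
    # smoother but is omitted for brevity.
    s = lut.shape[0]
    idx = np.clip(np.rint(frame * (s - 1)).astype(int), 0, s - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def grade_video(frames, lut):
    """Reuse one LUT -- generated once from the style image and the
    video's first frame -- for every frame. Because the same mapping is
    applied throughout, the result is fast and temporally stable."""
    return [apply_lut(f, lut) for f in frames]

# Stand-in for the network-generated LUT: an identity mapping on a
# 9-point grid per color axis.
g = np.linspace(0.0, 1.0, 9)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
frames = [np.full((4, 4, 3), v) for v in (0.0, 0.5, 1.0)]
graded = grade_video(frames, lut)
```

Generating the LUT once and replaying it per frame is the design choice that avoids per-frame network inference and the flicker it can introduce.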
It should be noted that, within the scope of protection defined by the claims of the present invention, the following embodiments can be combined, extended and replaced in any logical way with the above detailed description, for example with the disclosed technical principles and the disclosed or implicitly disclosed technical features.
实施例1Example 1
A color transfer method that generates a LUT with a neural network, comprising the following steps:
selecting a content image and a reference image, and feeding both into a neural network to generate a LUT; then applying the generated LUT to the content image to produce a color-transferred image, color transfer being achieved by constraining a style loss and a content loss.
实施例2Example 2
On the basis of Embodiment 1, before selecting the content image and the reference image, the method comprises the step of: constructing a color dataset, the content image and the reference image being selected from the constructed color dataset.
实施例3Example 3
On the basis of Embodiment 1, selecting the content image and the reference image comprises the steps of: given a style image and a content video, selecting a frame of the content video as the content image and using the style image as the reference image;
applying the generated LUT to the content image comprises the step of: applying the generated LUT to the frames of the video.
实施例4Example 4
On the basis of Embodiment 2, feeding the content image and the reference image into the neural network to generate a LUT, then applying the generated LUT to the content image to produce a color-transferred image, and achieving color transfer by constraining the style loss and the content loss, comprises the sub-steps of:
randomly selecting a reference image I_r and a content image I_c from the constructed color dataset, randomly cropping both to the same resolution in image space, and using the cropped reference image and content image as the input for training the neural network to generate the LUT;
when training of the neural network begins, applying the LUT generated by the network to the content image to obtain the color-transferred image I_t; this process is expressed as:
I_t = Φ(I_r, I_c)(I_c)
where Φ(I_r, I_c) denotes the LUT obtained by feeding the reference image I_r and the content image I_c into the neural network;
during training, extracting the deep features F_r, F_c and F_t of the reference image I_r, the content image I_c and the color-transferred image I_t, and computing the corresponding style loss L_s and content loss L_c, expressed as:
L_s = mse(mean(F_r) - mean(F_t)) + mse(std(F_r) - std(F_t))
L_c = mse(mean(F_c) - mean(F_t)) + mse(std(F_c) - std(F_t))
where mean denotes the mean of the image features, std denotes the standard deviation of the image features, and mse denotes the mean squared error of a matrix.
实施例5Example 5
On the basis of Embodiment 1 or Embodiment 4, the neural network performs convolutional feature extraction on the input style image and content image, fuses the extracted features, and finally outputs the weight values of a linear combination of N basic LUTs, N being an integer.
实施例6Example 6
On the basis of Embodiment 1 or Embodiment 4, the neural network contains N initialized basic LUTs whose values are updated by gradient back-propagation during network training, N being an integer.
实施例7Example 7
On the basis of Embodiment 3, after the generated LUT is applied to the frames of the video, producing the color-transferred image comprises the step of: transferring the colors of the content image according to the colors of the style image.
实施例8Example 8
On the basis of Embodiment 3, selecting a frame of the content video as the content image comprises the step of: using the first frame of the video as the content image;
after the generated LUT is applied to the first frame of the video, producing the color-transferred image comprises the step of: performing color transfer on all frames of the video with the color-transfer LUT generated by the neural network from the style image.
实施例9Example 9
A computer device comprising a processor and a memory, the memory storing a computer program which, when loaded and executed by the processor, implements the method of any one of Embodiments 1 to 4.
实施例10Example 10
A readable storage medium storing a program which, when loaded by a processor, implements the method of any one of Embodiments 1 to 4.
The units described in the embodiments of the present invention may be implemented in software or in hardware, and the described units may also be provided in a processor. The names of these units do not, under certain circumstances, constitute a limitation of the units themselves.
According to an aspect of the embodiments of the present invention, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various optional implementations above.
As another aspect, an embodiment of the present invention also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into that electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
The parts not covered by the present invention are the same as the prior art or can be realized with the prior art.
The above technical solution is only one embodiment of the present invention. On the basis of the application methods and principles disclosed herein, those skilled in the art can easily make various improvements or variations that are not limited to the methods described in the specific embodiments above; the manner described above is therefore only preferred and not limiting.
Beyond the above examples, those skilled in the art may obtain other embodiments, inspired by the above disclosure or by drawing on knowledge or techniques in the relevant field, and the features of the embodiments may be interchanged or replaced. Provided such changes do not depart from the spirit and scope of the present invention, they shall all fall within the protection scope of the appended claims of the present invention.
Claims (10)
Priority Applications (1)
- CN202211546270.3A (CN115660950B), priority/filing date 2022-12-05 — Color migration method, device and medium for generating LUT based on neural network
Publications (2)
- CN115660950A, published 2023-01-31
- CN115660950B, granted 2023-04-07
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110189253B (en) | Image super-resolution reconstruction method based on improved generation countermeasure network | |
CN109471900B (en) | Interaction method and system for chart data custom action data | |
WO2016078479A1 (en) | Method and device for monitoring web page changes | |
CN110458756A (en) | Fuzzy video super-resolution method and system based on deep learning | |
US9646398B2 (en) | Minimizing blur operations for creating a blur effect for an image | |
CN110706151B (en) | A Non-Uniform Style Transfer Method for Video | |
CN108874393B (en) | Rendering method, rendering device, storage medium and computer equipment | |
CN107220932B (en) | Panoramic image splicing method based on bag-of-words model | |
CN115660950B (en) | Color migration method, device and medium for generating LUT based on neural network | |
WO2018176207A1 (en) | Web theme switching method and system | |
CN114897711A (en) | Method, device and equipment for processing images in video and storage medium | |
CN114663603A (en) | Static object three-dimensional grid model generation method based on nerve radiation field | |
CN114241167A (en) | A template-free virtual dressing method and device from video to video | |
CN118505808A (en) | A transformer-based end-to-end multi-frame joint pose estimation method and device | |
Du et al. | Dense-connected residual network for video super-resolution | |
CN114998629A (en) | Satellite map and aerial image template matching method and unmanned aerial vehicle positioning method | |
CN114419482A (en) | Video identification method of mixed structure | |
US20250191122A1 (en) | Image generation method for eliminating splicing seams, computer device and storage medium | |
Shang et al. | Video stabilization based on low‐rank constraint and trajectory optimization | |
Cao et al. | Rolling shutter correction with intermediate distortion flow estimation | |
CN112860809A (en) | Data processing system, method, device, medium and equipment | |
CN118644756B (en) | Image processing method, device, computer equipment and readable storage medium | |
CN118628588A (en) | Color migration method, device and medium based on generating adaptive lookup table based on neural network | |
Liu et al. | Spatial-temporal integration network with self-guidance for robust video deraining | |
CN116011421A (en) | A table rendering method and device |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant