
CN113240599A - Image toning method and device, computer-readable storage medium and electronic equipment - Google Patents

Image toning method and device, computer-readable storage medium and electronic equipment

Info

Publication number
CN113240599A
CN113240599A
Authority
CN
China
Prior art keywords
image
processed
feature
information
statistical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110505184.7A
Other languages
Chinese (zh)
Other versions
CN113240599B (en)
Inventor
汪路超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110505184.7A priority Critical patent/CN113240599B/en
Publication of CN113240599A publication Critical patent/CN113240599A/en
Application granted granted Critical
Publication of CN113240599B publication Critical patent/CN113240599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract



The present disclosure relates to the technical field of image processing, and provides an image toning method and device, a computer-readable storage medium, and an electronic device. The method includes: acquiring statistical features of an image to be processed and statistical features of a reference image; generating, from the statistical features of the image to be processed and those of the reference image, a target mapping relationship corresponding to the image to be processed; and toning the image to be processed according to the target mapping relationship. By dynamically generating the target mapping relationship from the statistical features of the image to be processed and the reference image, the present disclosure produces a mapping tailored to the image, and toning the image according to that mapping improves the accuracy of image processing.


Description

Image toning method and device, computer-readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image toning method, an image toning apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of image processing technology, a single image can be adjusted into multiple styles, so that the same content can be rendered with several different looks.
In the prior art, color toning is often performed with a LUT (look-up table). However, LUT tables are crafted manually by designers, usually one table per style, and the same LUT table is applied to every image to be processed. This static-LUT approach is not tailored to individual images and gives poor toning results.
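As a rough illustration of this static approach (all names and the "warm" adjustment below are hypothetical, not taken from the patent), a per-channel 1D LUT reduces toning to a fixed table lookup:

```python
# Hypothetical static per-channel 1D LUT, as in the prior-art approach:
# one hand-made table, applied unchanged to every input image.

def make_warm_lut():
    """A fixed, designer-tuned LUT: boost reds slightly, dim blues slightly."""
    r_lut = [min(255, int(v * 1.10)) for v in range(256)]
    g_lut = list(range(256))                      # green left unchanged
    b_lut = [int(v * 0.90) for v in range(256)]
    return r_lut, g_lut, b_lut

def apply_lut(pixels, lut):
    """pixels: list of (r, g, b) tuples; lut: (r_lut, g_lut, b_lut)."""
    r_lut, g_lut, b_lut = lut
    return [(r_lut[r], g_lut[g], b_lut[b]) for (r, g, b) in pixels]

img = [(200, 128, 64), (10, 20, 30)]
toned = apply_lut(img, make_warm_lut())  # same table regardless of content
```

The limitation the patent addresses is visible here: `make_warm_lut` ignores the input image entirely, so every image receives exactly the same mapping.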
Disclosure of Invention
The present disclosure is directed to an image toning method, an image toning apparatus, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the prior art's lack of per-image adaptation in color toning.
According to a first aspect of the present disclosure, there is provided an image toning method including: acquiring the statistical characteristics of an image to be processed and the statistical characteristics of a reference image; generating a target mapping relation corresponding to the image to be processed according to the statistical characteristics of the image to be processed and the statistical characteristics of the reference image; and carrying out color matching processing on the image to be processed according to the target mapping relation.
According to a second aspect of the present disclosure, there is provided an image toning device including: the characteristic acquisition module is used for acquiring the statistical characteristics of the image to be processed and acquiring the statistical characteristics of the reference image; the mapping generation module is used for generating a target mapping relation corresponding to the image to be processed according to the statistical characteristics of the image to be processed and the statistical characteristics of the reference image; and the color matching processing module is used for performing color matching processing on the image to be processed according to the target mapping relation.
According to a third aspect of the present disclosure, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the image toning method as described in the above embodiments.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image toning method as described in the above embodiments.
According to the technical solutions above, the image toning method, the image toning device, the computer-readable storage medium, and the electronic equipment in the exemplary embodiments of the disclosure have at least the following advantages and positive effects:
first, the statistical features of an image to be processed and of a reference image are acquired; then, a target mapping relationship corresponding to the image to be processed is generated according to those statistical features; and finally, the image to be processed is toned according to the target mapping relationship. Because the target mapping relationship is generated dynamically from the statistical features of both the image to be processed and the reference image, it is tailored to the specific image, and toning the image to be processed according to it improves the accuracy of image processing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a schematic diagram of a system architecture of the present exemplary embodiment;
fig. 2 schematically shows a schematic view of an electronic device of the present exemplary embodiment;
FIG. 3 schematically shows a flow diagram of an image toning method according to an embodiment of the present disclosure;
FIG. 4 schematically shows a flowchart of a method for obtaining statistical characteristics of an image to be processed according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flowchart of a method of generating a target mapping relationship according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a flowchart of a method for obtaining feature information corresponding to an input pixel value according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a flowchart of a method of generating a target mapping relationship according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow diagram of an image toning method according to a specific embodiment of the present disclosure;
fig. 9 schematically shows a block diagram of an image toning device according to an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, a designer manually tunes LUT tables for different styles, and the LUT table of a target style is selected to perform LUT mapping on the image to be processed, so as to obtain an output image of that style. However, a manually tuned LUT table is static and not extensible, can only be used as a filter, and is not adapted to the individual images being toned.
Based on the problems in the related art, the embodiments of the present disclosure provide an image toning method, which is applied to the system architecture of the exemplary embodiments of the present disclosure. Fig. 1 shows a schematic diagram of a system architecture of an exemplary embodiment of the present disclosure. As shown in fig. 1, the system architecture 100 may include: terminal 110, network 120, and server 130. The terminal 110 may be any of various electronic devices with image acquisition functions, including but not limited to a mobile phone, a tablet computer, a personal computer, a smart wearable device, and the like. The medium used by network 120 to provide communication links between terminal 110 and server 130 may include various connection types, such as wired or wireless communication links, or fiber optic cables. It should be understood that the numbers of terminals, networks, and servers in fig. 1 are merely illustrative; there may be any number of each, as the implementation requires. For example, the server 130 may be a server cluster composed of a plurality of servers.
The image toning method provided by the embodiment of the disclosure can be executed by the terminal 110, for example, the terminal 110 acquires the image to be processed and the reference image, generates the target mapping relationship according to the statistical characteristics of the image to be processed and the reference image, and performs toning on the image to be processed according to the target mapping relationship.
In addition, the image toning method provided by the embodiment of the disclosure may also be executed by the server 130, for example, after the terminal 110 acquires the image to be processed and the reference image, the image to be processed and the reference image are uploaded to the server 130, so that the server 130 generates a target mapping relationship according to the statistical characteristics of the acquired image to be processed and the reference image, performs toning on the image to be processed according to the target mapping relationship, and returns the image to be processed after toning to the terminal 110, which is not limited by the disclosure.
An exemplary embodiment of the present disclosure provides an electronic device for implementing an image toning method, which may be the terminal 110 or the server 130 in fig. 1. The electronic device includes at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image toning method via execution of the executable instructions.
The electronic device may be implemented in various forms, and may include, for example, a mobile device such as a mobile phone, a tablet computer, a notebook computer, a Personal Digital Assistant (PDA), a navigation device, a wearable device, an unmanned aerial vehicle, and a stationary device such as a desktop computer and a smart television.
The following takes the mobile terminal 200 in fig. 2 as an example, and exemplifies the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of figure 2 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes. In other embodiments, mobile terminal 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also interface differently than shown in fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, a button 294, a Subscriber Identity Module (SIM) card interface 295, and the like. The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, and the like.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The NPU is a neural-network (NN) computing processor. By borrowing from biological neural-network structures, for example the transfer mode between neurons of a human brain, it processes input information quickly and can also continuously learn on its own. The NPU can support intelligent-recognition applications on the mobile terminal 200, such as image recognition, face recognition, speech recognition, and text understanding.
A memory is provided in the processor 210. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and execution is controlled by processor 210.
The charge management module 240 is configured to receive a charging input from a charger. The power management module 241 is used for connecting the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives the input of the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 290, the camera module 291, the wireless communication module 260, and the like.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. Wherein, the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals; the mobile communication module 250 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the mobile terminal 200; the modem processor may include a modulator and a demodulator; the Wireless communication module 260 may provide a solution for Wireless communication including a Wireless Local Area Network (WLAN) (e.g., a Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), and the like, applied to the mobile terminal 200. In some embodiments, antenna 1 of the mobile terminal 200 is coupled to the mobile communication module 250 and antenna 2 is coupled to the wireless communication module 260, such that the mobile terminal 200 may communicate with networks and other devices via wireless communication techniques.
The mobile terminal 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. The ISP is used for processing data fed back by the camera module 291; the camera module 291 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 200 may also support one or more video codecs.
The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 200. The external memory card communicates with the processor 210 through the external memory interface 222 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like. The processor 210 executes various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the application processor, and the like. Such as music playing, recording, etc.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be provided to the camera module 291.
The pressure sensor 2802 is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 2802 may be disposed on the display screen 290. Pressure sensor 2802 can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 2803 may be used to determine a motion gesture of the mobile terminal 200. In some embodiments, the angular velocity of the mobile terminal 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 2803. The gyro sensor 2803 can be used to photograph anti-shake, navigation, body-feel game scenes, and the like.
In addition, other functional sensors, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 280 according to actual needs.
Other devices for providing auxiliary functions may also be included in mobile terminal 200. For example, the keys 294 include a power-on key, a volume key, and the like, and a user can generate key signal inputs related to user settings and function control of the mobile terminal 200 through key inputs. Further examples include indicator 292, motor 293, SIM card interface 295, etc.
The image toning method and the image toning apparatus according to the exemplary embodiments of the present disclosure are specifically described below. Fig. 3 shows a flow diagram of an image toning method, which, as shown in fig. 3, comprises at least the following steps:
step S310: acquiring the statistical characteristics of an image to be processed and the statistical characteristics of a reference image;
step S320: generating a target mapping relation corresponding to the image to be processed according to the statistical characteristics of the image to be processed and the statistical characteristics of the reference image;
step S330: and carrying out color matching treatment on the image to be processed according to the target mapping relation.
According to the image toning method, the target mapping relationship is dynamically generated from the statistical features of the image to be processed and the reference image, so the generated mapping is tailored to the image; toning the image to be processed according to this target mapping relationship improves the accuracy of the toning process.
In order to make the technical solution of the present disclosure clearer, each step of the image toning method is explained next.
In step S310, the statistical features of the image to be processed are acquired, and the statistical features of the reference image are acquired.
In an exemplary embodiment of the present disclosure, the image to be processed refers to the target image on which image toning is performed. It may be an image input by the user, an image captured by an image acquisition unit, or an image drawn with image editing software.
In addition, the reference image refers to a source image for providing style information during image toning. For example, the reference image may be an image having a vintage style, or may be an image having a warm tone or a cool tone, which is not particularly limited in the present disclosure.
For example, based on an image A and an image B, the image toning process outputs a target image C having the content of image A and the style of image B. In this case, image A is the image to be processed and image B is the reference image.
In an exemplary embodiment of the present disclosure, before obtaining the statistical features of the image to be processed, the image to be processed is scaled to obtain a processed image to be processed, where a resolution of the processed image to be processed is smaller than a resolution of the image to be processed.
Specifically, the image to be processed is scaled by a scaling function to obtain a lower-resolution version, for example using a resize function: an image with a resolution of 1080 × 1920 may be scaled to an image with a resolution of 640 × 360. Of course, the scaling factor may be any factor; this disclosure is not limited in this respect.
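To illustrate the scaling step, here is a minimal nearest-neighbour downscale in pure Python (illustrative only; a production pipeline would use an optimized, anti-aliased resize function):

```python
# Nearest-neighbour downscale sketch (illustrative only; production code
# would use an optimized, anti-aliased resize). `image` is a list of rows,
# each row a list of (r, g, b) tuples.

def resize_nearest(image, new_w, new_h):
    old_h, old_w = len(image), len(image[0])
    return [
        [image[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# A 4x4 dummy image downscaled to 2x2, standing in for the
# 1080x1920 -> 640x360 example in the text.
img = [[(x * 10 + y, 0, 0) for x in range(4)] for y in range(4)]
small = resize_nearest(img, 2, 2)
```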
In an exemplary embodiment of the present disclosure, the statistical features include feature mean information and feature standard deviation information. The feature mean information comprises a feature mean corresponding to feature information of the image to be processed, and the feature standard deviation information comprises a feature standard deviation corresponding to the feature information of the image to be processed.
Specifically, fig. 4 is a schematic flowchart of a method for acquiring statistical characteristics of an image to be processed, and as shown in fig. 4, the flow at least includes steps S410 to S420, which are described in detail as follows:
in step S410, the image to be processed is input into the feature extraction model to obtain feature information corresponding to the image to be processed.
In an exemplary embodiment of the present disclosure, the feature extraction model may be a convolutional neural network (CNN). A CNN is a hierarchical model whose input is raw data, such as an RGB image or raw audio data. By stacking a series of operations layer by layer, including convolution operations, pooling operations, and non-linear activation function mappings, the convolutional neural network extracts high-level semantic information from the raw input data; this process is the feed-forward pass.
For example, the feature extraction model may be a convolutional neural network with a 1 × 1 convolution kernel, which processes each pixel of the input image independently and expands the RGB channels of each pixel into multi-channel feature information. For example, the pixel information of the RGB channels may be converted into feature information of 64 channels; the specific structure of the convolutional neural network is not limited by the present disclosure.
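Since a 1 × 1 convolution mixes channels but not neighbouring pixels, it can be sketched as a per-pixel matrix multiply (weights below are arbitrary illustrative values, and 3 → 4 channels stands in for the 3 → 64 expansion mentioned above):

```python
# 1x1 convolution sketch: each pixel's RGB vector is multiplied by an
# (out_channels x 3) weight matrix, independently of its neighbours.
# Weights are arbitrary illustrative values; the text mentions 3 -> 64
# channels, shortened here to 3 -> 4.

WEIGHTS = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.33, 0.33, 0.33],  # a rough luminance-like mixed channel
]

def conv1x1(pixel, weights=WEIGHTS):
    """pixel: (r, g, b) -> list of out_channel feature values."""
    return [sum(w * c for w, c in zip(row, pixel)) for row in weights]

features = conv1x1((100, 50, 200))
```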
Specifically, an image to be processed is input into a convolutional neural network, and feature extraction is performed on the image to be processed through the convolutional neural network, so that feature information of the image to be processed is obtained. In addition, the image to be processed after the scaling processing can be input into a convolutional neural network to obtain the characteristic information of the image to be processed.
In step S420, feature mean information and feature standard deviation information of the image to be processed are calculated according to the feature information of the image to be processed.
In the exemplary embodiment of the present disclosure, after the feature information of the image to be processed is acquired, the mean value and the standard deviation corresponding to that feature information are calculated and taken as the statistical features of the image to be processed.
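The per-channel statistics can be sketched as follows (population standard deviation is assumed here, as is conventional for instance normalization; the patent does not specify the exact estimator):

```python
# Per-channel mean and standard deviation over a feature map: the
# "statistical features" used by the method. Population std is assumed.
import math

def channel_stats(feature_map):
    """feature_map: list of per-pixel feature vectors of equal length."""
    n_ch = len(feature_map[0])
    stats = []
    for c in range(n_ch):
        vals = [px[c] for px in feature_map]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        stats.append((mean, math.sqrt(var)))
    return stats

stats = channel_stats([[1.0, 10.0], [3.0, 10.0]])  # 2 pixels, 2 channels
```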
In an exemplary embodiment of the present disclosure, feature extraction may be performed on a reference image through a feature extraction model to obtain feature information of the reference image, a mean value and a standard deviation corresponding to the feature information of the reference image are calculated, and the mean value and the standard deviation corresponding to the feature information of the reference image are configured as statistical features of the reference image. The feature extraction model may be the same as or different from the feature extraction model for obtaining the feature information of the image to be processed, and this disclosure does not specifically limit this.
Feature extraction may be performed on the reference image in advance to obtain the feature information of the reference image, and the statistical features of the reference image may be calculated according to that feature information. The statistical features corresponding to the reference image are stored in a database, from which they can be directly obtained. For example, if a certain style image is selected for toning the image to be processed, the statistical features corresponding to that style image are obtained from the database.
In addition, the feature extraction model can be used for extracting the features of the reference image in real time and acquiring the statistical features of the reference image in real time.
In step S320, a target mapping relationship corresponding to the image to be processed is generated according to the statistical features of the image to be processed and the statistical features of the reference image.
In an exemplary embodiment of the present disclosure, the target mapping relationship may be a LUT table, where the index numbers of the LUT table are input pixel values and the index values of the LUT table are output pixel values. The input pixel values and the output pixel values each include values of the RGB channels, that is, pixel values composed of a red channel R, a green channel G, and a blue channel B.
In an exemplary embodiment of the present disclosure, the statistical features of the image to be processed and the statistical features of the reference image may be stored in an AdaIN (adaptive instance normalization) module, and the target mapping relationship may be generated by the AdaIN module. Fig. 5 is a schematic flowchart of a method for generating a target mapping relationship; as shown in fig. 5, the flowchart at least includes steps S510 to S520, which are described in detail as follows:
in step S510, an input pixel value is acquired, and feature extraction is performed on the input pixel value to obtain feature information corresponding to the input pixel value.
In an exemplary embodiment of the present disclosure, the input pixel values may be a set of pixel values set in advance. For example, the red channel R, the green channel G, and the blue channel B may each take pixel values of 0 to 255. For example, the input pixel values may be the 256 × 256 × 256 combinations (0,0,0), (0,0,1), (0,1,0), (1,0,0), ..., (0,0,255), (0,255,0), (255,0,0), ..., (255,255,255). For another example, the input pixel values may be the 33 × 33 × 33 sampled combinations (0,0,0), (0,0,8), (0,8,0), (8,0,0), ..., (0,0,32), (0,32,0), (32,0,0), ..., (0,0,255), (0,255,0), (255,0,0), ..., (255,255,255).
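The preset grid of input pixel values and the reshape of step S620 can be sketched together. Sampling every 8th level and appending 255 (0, 8, ..., 248, 255) is an assumption based on the examples in the text; it gives 33 levels per channel:

```python
import numpy as np

# Enumerate the 33 x 33 x 33 grid of RGB input pixel values used as
# LUT index numbers (the exact sampling scheme is an assumption).
levels = np.append(np.arange(0, 256, 8), 255)            # 33 levels per channel
r, g, b = np.meshgrid(levels, levels, levels, indexing="ij")
input_pixels = np.stack([r, g, b], axis=-1).reshape(-1, 3)

# Matrix transformation: reshape the flat list of RGB triples into
# W x H image information, here 33 x 1089 since 33 * 1089 = 33**3.
image_info = input_pixels.reshape(33, 1089, 3)
print(input_pixels.shape, image_info.shape)              # (35937, 3) (33, 1089, 3)
```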
In addition, the input pixel values may also be obtained by traversing the image to be processed and taking its pixel values as the input pixel values. Specifically, fig. 6 is a schematic flowchart of a method for acquiring feature information corresponding to input pixel values; as shown in fig. 6, the flowchart at least includes steps S610 to S630, which are described in detail as follows:
in step S610, the image to be processed is traversed, a pixel value corresponding to the image to be processed is obtained, and the pixel value of the image to be processed is configured as an input pixel value.
In the exemplary embodiment of the present disclosure, all pixel points in the image to be processed are traversed, pixel values corresponding to all pixel points are obtained, and the pixel values corresponding to all pixel points are all used as input pixel values.
In addition, the pixel values of all the pixel points of the image to be processed may be traversed to obtain the pixel range of the values in the image to be processed, and the input pixel values may be configured according to that range. For example, if the pixel values of the image to be processed fall within 132 to 255 in the red channel R, the green channel G, and the blue channel B, the input pixel values are configured as (0,0,132), (0,132,0), (132,0,0), ..., (0,0,255), (0,255,0), (255,0,0), (255,255,255), and the like.
In step S620, the input pixel values are matrix-transformed to obtain image information corresponding to the input pixel values.
In an exemplary embodiment of the present disclosure, the input pixel values are input into a reshape function and matrix-transformed to obtain image information corresponding to the input pixel values. The image information may include W × H RGB pixel values, where W and H may be any positive integers whose product equals the number of input pixel values. For example, the 33 × 33 × 33 input pixel values are input into the reshape function and matrix-transformed to obtain image information of 33 × 1089 (33 × 1089 = 33 × 33 × 33 = 35937).
In step S630, the image information is input into the feature extraction model to obtain feature information corresponding to the image information.
In an exemplary embodiment of the present disclosure, feature extraction is performed on image information through a feature extraction model to obtain feature information corresponding to the image information, that is, feature information corresponding to an input pixel value.
Further, an initial LUT table with 33 × 33 × 33 entries may be obtained, whose index numbers are the 33 × 33 × 33 input pixel values (0,0,0), (0,0,8), (0,8,0), (8,0,0), ..., (0,0,255), (0,255,0), (255,0,0), ..., (255,255,255). The index values of the initial LUT table are 33 × 33 × 33 initial output pixel values, which may be any pixel values; this disclosure does not specifically limit this.
The 33 × 33 × 33 entries of the initial LUT table are input into a reshape function for matrix transformation to obtain 33 × 1089 image information, and the image information corresponding to the initial LUT table is input into the feature extraction model to obtain the feature information corresponding to the initial LUT table.
Continuing to refer to fig. 5, in step S520, a target mapping relationship corresponding to the image to be processed is generated according to the feature information of the input pixel values, the statistical features of the image to be processed, and the statistical features of the reference image.
In an exemplary embodiment of the present disclosure, output feature information corresponding to feature information of an input pixel value is calculated according to the feature information corresponding to the input pixel value, the statistical feature of the image to be processed, and the statistical feature of the reference image, and the output feature information is input into a feature mapping model to obtain an output pixel value.
Specifically, fig. 7 is a schematic flowchart of a method for generating a target mapping relationship, and as shown in fig. 7, the flowchart at least includes steps S710 to S730, which are described in detail as follows:
in step S710, output feature information corresponding to the feature information of the input pixel value is acquired according to the feature information of the input pixel value, the statistical feature of the image to be processed, and the statistical feature of the reference image.
In an exemplary embodiment of the present disclosure, the output feature information is calculated according to the following formula (1):

P = σ(y) × (s − μ(x)) / σ(x) + μ(y)    (1)

where P denotes the output feature information, s denotes the feature information of the input pixel value, μ(x) denotes the feature mean information of the image to be processed, σ(x) denotes the feature standard deviation information of the image to be processed, μ(y) denotes the feature mean information of the reference image, and σ(y) denotes the feature standard deviation information of the reference image.
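Formula (1) can be written directly as a channel-wise operation. In the sketch below, the array shapes and the scalar statistics are illustrative stand-ins, not the patent's trained models:

```python
import numpy as np

# Formula (1): P = sigma(y) * (s - mu(x)) / sigma(x) + mu(y), applied
# element-wise to the features of each input pixel value.
def adain(s, mu_x, sigma_x, mu_y, sigma_y):
    """Map features s from the to-be-processed statistics to the reference ones."""
    return sigma_y * (s - mu_x) / sigma_x + mu_y

rng = np.random.default_rng(2)
s = rng.random((35937, 64))          # features of the 33**3 input pixel values
P = adain(s, mu_x=0.5, sigma_x=0.2, mu_y=0.3, sigma_y=0.4)
print(P.shape)                       # (35937, 64)
```

A quick sanity check of the formula: a feature exactly at the source mean (s = μ(x)) maps exactly to the reference mean μ(y).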
In step S720, the output feature information is feature-mapped to obtain an output pixel value corresponding to the output feature information.
In an exemplary embodiment of the present disclosure, the output feature information is input into the feature mapping model to obtain an output pixel value corresponding to the output feature information. The feature mapping model may be a convolutional neural network that projects the multi-channel output feature information back down to the RGB channels, obtaining output pixel values for the RGB channels.
In step S730, a target mapping relationship is generated from the input pixel values and the output pixel values.
In an exemplary embodiment of the present disclosure, a target mapping relationship is created between each input pixel value and the output pixel value corresponding to it. The target mapping relationship may be in the form of an index, where an input pixel value is used as an index number and the corresponding output pixel value is used as the index value for that index number.
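The index form described above can be sketched with a plain dictionary; the four entries and their output values below are made-up illustrative numbers, and the real table would hold 33 × 33 × 33 entries:

```python
# Input pixel values serve as index numbers, output pixel values as index
# values; a dict stands in for the LUT data structure.
input_pixels = [(0, 0, 0), (0, 0, 8), (0, 8, 0), (8, 0, 0)]
output_pixels = [(2, 1, 0), (3, 2, 9), (1, 9, 2), (10, 1, 1)]   # assumed outputs

lut = dict(zip(input_pixels, output_pixels))
print(lut[(0, 0, 8)])          # (3, 2, 9)
```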
After the feature information corresponding to the initial LUT table is obtained, the output feature information corresponding to the initial LUT table is obtained according to the above formula (1), and the feature mapping is performed on the output feature information by using the feature mapping model, so as to obtain the output pixel value corresponding to the initial LUT table. And correspondingly replacing the initial output pixel value in the initial LUT table according to the output pixel value to obtain a target LUT table.
In step S330, the image to be processed is subjected to color matching processing according to the target mapping relationship.
In the exemplary embodiment of the disclosure, the pixel value of each pixel point in the image to be processed is obtained and matched against the input pixel values in the target mapping relationship. The output pixel value corresponding to the matched input pixel value then replaces the pixel value of the corresponding pixel point, so as to obtain the target image after toning processing.
If the input pixel value corresponding to the pixel value of the pixel point in the image to be processed does not exist in the target mapping relation, two or more input pixel values closest to the pixel value of the pixel point in the image to be processed are obtained, and interpolation operation is performed on the two or more input pixel values to obtain an output pixel value corresponding to the pixel point.
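The text only says that two or more nearest input pixel values are interpolated; a common concrete choice for a 33 × 33 × 33 LUT is trilinear interpolation over the eight surrounding grid points. The sketch below assumes a uniform 33-level grid per channel, which is an assumption rather than the patent's stated layout:

```python
import numpy as np

GRID = np.linspace(0.0, 255.0, 33)   # 33 uniform levels per channel (assumed)
STEP = 255.0 / 32                     # spacing between grid levels

def lookup(lut3d, rgb):
    """Trilinearly interpolate lut3d (shape (33, 33, 33, 3)) at an RGB triple."""
    rgb = np.asarray(rgb, dtype=float)
    idx = np.minimum((rgb / STEP).astype(int), 31)   # lower grid corner
    frac = rgb / STEP - idx                           # position inside the cell
    out = np.zeros(3)
    for corner in range(8):                           # 8 surrounding grid points
        offs = [(corner >> k) & 1 for k in range(3)]
        w = 1.0
        for k in range(3):
            w *= frac[k] if offs[k] else 1.0 - frac[k]
        out += w * lut3d[idx[0] + offs[0], idx[1] + offs[1], idx[2] + offs[2]]
    return out

# Identity LUT: each grid point maps to its own RGB value, so interpolation
# should return the query unchanged.
identity = np.stack(np.meshgrid(GRID, GRID, GRID, indexing="ij"), axis=-1)
print(lookup(identity, (120, 64, 7)))   # ~[120. 64. 7.]
```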
Fig. 8 is a schematic flowchart of an image toning method according to the present embodiment, and as shown in fig. 8, the flowchart at least includes steps S810 to S890, which are described in detail as follows:
in step S810, an image to be processed is obtained, and the image to be processed is scaled to obtain a processed image to be processed.
For example, the resolution of the image to be processed is 1080 × 1920, and scaling processing is performed to obtain an image with a resolution of 640 × 360.
In step S820, feature extraction is performed on the processed image to be processed through the feature extraction model to obtain feature information of the image to be processed, and feature mean information and feature standard deviation information corresponding to the feature information of the image to be processed are obtained.
In step S830, feature extraction is performed on the reference image through the feature extraction model to obtain feature information of the reference image, and feature mean information and feature standard deviation information corresponding to the feature information of the reference image are obtained.
In step S840, the feature mean information and feature standard deviation information of the image to be processed, and the feature mean information and feature standard deviation information of the reference image, are stored in the AdaIN model.
Here, an AdaIN model is constructed according to formula (1) in the above embodiment; the input of the AdaIN model is the feature information of the input pixel values, and its output is the output feature information.
In step S850, an initial LUT table is obtained, the input pixel values in the initial LUT table are subjected to matrix transformation to obtain image information corresponding to the initial LUT table, and feature information corresponding to the image information of the initial LUT table is obtained by a feature extraction model.
The initial LUT table includes input pixel values and initial output pixel values, and the input pixel values are the 33 × 33 × 33 preset RGB pixel values.
In step S860, the feature information corresponding to the initial LUT table is input into the AdaIN model, and the output feature information corresponding to the initial LUT table is obtained by the AdaIN model according to formula (1).
In step S870, the output feature information corresponding to the initial LUT table is input into the feature mapping model to obtain an output pixel value corresponding to the output feature information, and the initial output pixel value in the initial LUT table is replaced according to the output pixel value to obtain a target LUT table.
In step S880, the pixel value of each pixel point in the image to be processed is obtained, and the pixel value of each pixel point is used as an input pixel value to perform indexing in the target LUT table, so as to obtain an output pixel value corresponding to each pixel point.
In step S890, a target image after color matching is generated from the output pixel values corresponding to the respective pixel points.
In a specific embodiment of the present disclosure, the resolution of the image to be processed is scaled from 1080 × 1920 down to 640 × 360, and the amount of computation is reduced from 1080 × 1920 = 2073600 pixel-wise operations to 640 × 360 + 33 × 33 × 33 = 266337, roughly an order of magnitude. In addition, experiments verify that reducing an image of any resolution to 360P causes no obvious loss of precision. Therefore, the larger the resolution of the image to be processed, the larger the speedup obtained.
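The arithmetic behind this estimate can be checked directly:

```python
# Operation-count comparison from this embodiment: the full-resolution pass
# costs one feature extraction per pixel, while the proposed pipeline pays
# for the downscaled image plus the 33^3 LUT grid entries.
full = 1080 * 1920                 # pixels processed without scaling
reduced = 640 * 360 + 33 ** 3      # downscaled image + LUT grid entries
print(full, reduced, round(full / reduced, 1))   # 2073600 266337 7.8
```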
In the image toning method of this embodiment, the target mapping relationship is generated according to the statistical features of the image to be processed and the reference image, and toning processing is then performed on the image to be processed according to the target mapping relationship. On the one hand, the method can dynamically generate the target mapping relationship according to the difference between the image to be processed and the reference image; since the generated target mapping relationship is specific to this pair of images, the toning accuracy is higher. On the other hand, the resolution of the image to be processed is reduced through scaling, and the target mapping relationship is generated from the statistical information of the processed image, which greatly reduces the amount of computation and the system consumption of generating the target mapping relationship.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. The computer program, when executed by the CPU, performs the functions defined by the method provided by the present invention. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic or optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Embodiments of the disclosed apparatus are described below, which can be used to perform the above-described image toning methods of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the image toning method described above in the present disclosure.
Fig. 9 schematically shows a block diagram of an image toning apparatus according to one embodiment of the present disclosure.
Referring to fig. 9, an image toning device 900 according to one embodiment of the present disclosure, the image toning device 900 includes: a feature acquisition module 901, a mapping generation module 902, and a toning module 903. Specifically, the method comprises the following steps:
a feature obtaining module 901, configured to obtain statistical features of an image to be processed and obtain statistical features of a reference image;
a mapping generating module 902, configured to generate a target mapping relationship corresponding to the image to be processed according to the statistical features of the image to be processed and the statistical features of the reference image;
and the color matching processing module 903 is used for performing color matching processing on the image to be processed according to the target mapping relation.
In an exemplary embodiment of the present disclosure, the mapping generating module 902 may be further configured to obtain an input pixel value, and perform feature extraction on the input pixel value to obtain feature information corresponding to the input pixel value; and generating a target mapping relation corresponding to the image to be processed according to the characteristic information of the input pixel value, the statistical characteristic of the image to be processed and the statistical characteristic of the reference image.
In an exemplary embodiment of the present disclosure, the mapping generating module 902 may be further configured to obtain output feature information corresponding to the feature information of the input pixel value according to the feature information of the input pixel value, the statistical feature of the image to be processed, and the statistical feature of the reference image; performing feature mapping on the output feature information to obtain an output pixel value corresponding to the output feature information; and generating a target mapping relation according to the input pixel value and the output pixel value.
In an exemplary embodiment of the disclosure, the mapping generation module 902 may be further configured to calculate the output feature information according to the following formula (1):

P = σ(y) × (s − μ(x)) / σ(x) + μ(y)    (1)

where P denotes the output feature information, s denotes the feature information of the input pixel value, μ(x) denotes the feature mean information of the image to be processed, σ(x) denotes the feature standard deviation information of the image to be processed, μ(y) denotes the feature mean information of the reference image, and σ(y) denotes the feature standard deviation information of the reference image. The statistical features comprise feature mean information and feature standard deviation information.
In an exemplary embodiment of the present disclosure, the mapping generating module 902 may be further configured to traverse the image to be processed, obtain a pixel value corresponding to the image to be processed, and configure the pixel value of the image to be processed as an input pixel value; performing matrix transformation on the input pixel values to obtain image information corresponding to the input pixel values; and inputting the image information into the feature extraction model to obtain feature information corresponding to the image information.
In an exemplary embodiment of the present disclosure, the feature obtaining module 901 may be further configured to input the image to be processed into the feature extraction model, so as to obtain feature information corresponding to the image to be processed; and calculating the characteristic mean value information and the characteristic standard deviation information of the image to be processed according to the characteristic information of the image to be processed.
In an exemplary embodiment of the disclosure, the image toning device 900 may further include a scaling module (not shown in the figure) for scaling the image to be processed to obtain a processed image to be processed, where a resolution of the processed image to be processed is smaller than a resolution of the image to be processed.
The specific details of each module in the image color matching device are described in detail in the embodiment of the image color matching method, and the details that are not disclosed can be referred to the embodiment of the image color matching method, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3 to 8 may be performed.
Exemplary embodiments of the present disclosure also provide a program product for implementing the above method, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. An image toning method, characterized by comprising:
acquiring statistical features of an image to be processed, and acquiring statistical features of a reference image;
generating a target mapping relationship corresponding to the image to be processed according to the statistical features of the image to be processed and the statistical features of the reference image; and
performing toning processing on the image to be processed according to the target mapping relationship.

2. The image toning method according to claim 1, wherein generating the target mapping relationship corresponding to the image to be processed according to the statistical features of the image to be processed and the statistical features of the reference image comprises:
acquiring input pixel values, and performing feature extraction on the input pixel values to obtain feature information corresponding to the input pixel values; and
generating the target mapping relationship corresponding to the image to be processed according to the feature information of the input pixel values, the statistical features of the image to be processed, and the statistical features of the reference image.

3. The image toning method according to claim 2, wherein generating the target mapping relationship corresponding to the image to be processed according to the feature information of the input pixel values, the statistical features of the image to be processed, and the statistical features of the reference image comprises:
obtaining output feature information corresponding to the feature information of the input pixel values according to the feature information of the input pixel values, the statistical features of the image to be processed, and the statistical features of the reference image;
performing feature mapping on the output feature information to obtain output pixel values corresponding to the output feature information; and
generating the target mapping relationship according to the input pixel values and the output pixel values.

4. The image toning method according to claim 3, wherein the statistical features comprise feature mean information and feature standard deviation information; and
obtaining the output feature information corresponding to the feature information of the input pixel values comprises calculating the output feature information according to the following formula (1):

P = σ(y) × (s − μ(x)) / σ(x) + μ(y)    (1)

where P denotes the output feature information, s denotes the feature information of the input pixel value, μ(x) denotes the feature mean information of the image to be processed, σ(x) denotes the feature standard deviation information of the image to be processed, μ(y) denotes the feature mean information of the reference image, and σ(y) denotes the feature standard deviation information of the reference image.

5. The image toning method according to claim 2, wherein before acquiring the statistical features of the image to be processed, the method further comprises:
scaling the image to be processed to obtain a processed image to be processed, wherein a resolution of the processed image to be processed is smaller than a resolution of the image to be processed.

6. The image toning method according to claim 5, wherein acquiring input pixel values and performing feature extraction on the input pixel values to obtain the feature information corresponding to the input pixel values comprises:
traversing the image to be processed, acquiring pixel values corresponding to the image to be processed, and configuring the pixel values of the image to be processed as the input pixel values;
performing matrix transformation on the input pixel values to obtain image information corresponding to the input pixel values; and
inputting the image information into a feature extraction model to obtain the feature information corresponding to the image information.

7. The image toning method according to claim 6, wherein acquiring the statistical features of the image to be processed comprises:
inputting the image to be processed into the feature extraction model to obtain feature information corresponding to the image to be processed; and
calculating the feature mean information and the feature standard deviation information of the image to be processed according to the feature information of the image to be processed.

8. An image toning apparatus, characterized by comprising:
a feature acquisition module, configured to acquire statistical features of an image to be processed and acquire statistical features of a reference image;
a mapping generation module, configured to generate a target mapping relationship corresponding to the image to be processed according to the statistical features of the image to be processed and the statistical features of the reference image; and
a toning processing module, configured to perform toning processing on the image to be processed according to the target mapping relationship.

9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image toning method according to any one of claims 1 to 7.

10. An electronic device, characterized by comprising:
one or more processors; and
a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image toning method according to any one of claims 1 to 7.
CN202110505184.7A 2021-05-10 2021-05-10 Image color adjustment method and device, computer readable storage medium, and electronic device Active CN113240599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110505184.7A CN113240599B (en) 2021-05-10 2021-05-10 Image color adjustment method and device, computer readable storage medium, and electronic device


Publications (2)

Publication Number Publication Date
CN113240599A true CN113240599A (en) 2021-08-10
CN113240599B CN113240599B (en) 2024-09-24

Family

ID=77133247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110505184.7A Active CN113240599B (en) 2021-05-10 2021-05-10 Image color adjustment method and device, computer readable storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN113240599B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449199A (en) * 2021-08-12 2022-05-06 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN116433781A (en) * 2021-12-28 2023-07-14 北京小米移动软件有限公司 Toning method, device, electronic device and storage medium
CN117079102A (en) * 2023-09-06 2023-11-17 深圳地平线机器人科技有限公司 Image bit width conversion method, neural network training method, device and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180308269A1 (en) * 2017-04-24 2018-10-25 Intel Corporation Hdr enhancement with temporal multiplex
CN109285112A (en) * 2018-09-25 2019-01-29 京东方科技集团股份有限公司 Image processing method and image processing device based on neural network
CN109754375A (en) * 2018-12-25 2019-05-14 广州华多网络科技有限公司 Image processing method, system, computer equipment, storage medium and terminal
CN110070124A (en) * 2019-04-15 2019-07-30 广州小鹏汽车科技有限公司 A kind of image amplification method and system based on production confrontation network
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, calculates equipment and storage medium at system
CN110473141A (en) * 2019-08-02 2019-11-19 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111489322A (en) * 2020-04-09 2020-08-04 广州光锥元信息科技有限公司 Method and device for adding sky filter to static picture
CN111583165A (en) * 2019-02-19 2020-08-25 京东方科技集团股份有限公司 Image processing method, device, equipment and storage medium
US20210049468A1 (en) * 2018-11-14 2021-02-18 Nvidia Corporation Generative adversarial neural network assisted video reconstruction



Also Published As

Publication number Publication date
CN113240599B (en) 2024-09-24

Similar Documents

Publication Publication Date Title
CN112562019A (en) Image color adjusting method and device, computer readable medium and electronic equipment
WO2022068487A1 (en) Styled image generation method, model training method, apparatus, device, and medium
WO2022068451A1 (en) Style image generation method and apparatus, model training method and apparatus, device, and medium
CN111369427A (en) Image processing method, image processing device, readable medium and electronic equipment
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN114078083A (en) Hair transformation model generation method and device, and hair transformation method and device
CN113240599B (en) Image color adjustment method and device, computer readable storage medium, and electronic device
US20240386640A1 (en) Method, apparatus, device and storage medium for generating character style profile image
CN112581635A (en) Universal quick face changing method and device, electronic equipment and storage medium
CN111866483A (en) Color reproduction method and apparatus, computer readable medium and electronic device
CN111798385A (en) Image processing method and apparatus, computer readable medium and electronic device
CN113744286A (en) Virtual hair generation method and device, computer readable medium and electronic equipment
CN111950570B (en) Target image extraction method, neural network training method and device
CN113936089A (en) Interface rendering method and device, storage medium and electronic equipment
CN110070499A (en) Image processing method, device and computer readable storage medium
CN113658065A (en) Image noise reduction method and device, computer readable medium and electronic equipment
JP7700353B1 (en) Method, device and equipment for generating audio-driven 3D facial animation model
CN113284206B (en) Information acquisition method and device, computer readable storage medium, and electronic device
CN112034984A (en) A virtual model processing method, device, electronic device and storage medium
CN113409204A (en) Method and device for optimizing image to be processed, storage medium and electronic equipment
CN113537470A (en) Model quantization method and device, storage medium and electronic device
CN113362260A (en) Image optimization method and device, storage medium and electronic equipment
CN114418835B (en) Image processing method, device, equipment and medium
CN113610724B (en) Image optimization method and device, storage medium and electronic device
CN110619602A (en) Image generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant