CN114257730B - Image data processing method, device, storage medium and computer equipment - Google Patents
- Publication number
- CN114257730B CN114257730B CN202011003453.1A CN202011003453A CN114257730B CN 114257730 B CN114257730 B CN 114257730B CN 202011003453 A CN202011003453 A CN 202011003453A CN 114257730 B CN114257730 B CN 114257730B
- Authority
- CN
- China
- Prior art keywords
- filter
- data
- shooting
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
Abstract
The invention discloses an image data processing method and device, a storage medium, and computer equipment. The method comprises: collecting shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; analyzing the shooting data and determining a filter model matched with the shooting data from a plurality of types of filter models, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate a filter image. The invention solves the technical problems that user-side image processing demands high professional skill and has low processing efficiency.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for processing image data, a storage medium, and a computer device.
Background
In the field of image processing, external factors such as a poorly contrasted image subject, insufficient color, or over- or under-exposure of the imaging device often leave a captured image or video looking flat and lifeless. To improve the imaging effect, the user may process the image with image-processing software or re-shoot it. However, the former demands considerable professional skill from the user and is time- and labor-consuming, while the latter is inefficient and dampens the user's enthusiasm for shooting.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide an image data processing method and device, a storage medium, and computer equipment, to at least solve the technical problems that user-side image processing demands high professional skill and has low processing efficiency.
According to one aspect of the embodiments of the invention, an image data processing method is provided, comprising: collecting shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; analyzing the shooting data and determining a filter model matched with the shooting data from a plurality of types of filter models, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate a filter image.
According to another aspect of the embodiments of the invention, an image data processing method comprises: displaying, on an interactive interface, shooting data acquired by a shooting device, wherein the shooting data comprises at least one of shooting pictures and shooting videos; if a filter instruction is detected in any area of the interactive interface, triggering analysis of a picture of the shooting data and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model; and displaying a filter image on the interactive interface, wherein the filter image is generated by performing filter processing on the shooting data using the selected filter model.
According to a further aspect of the embodiments of the invention, an image data processing method comprises: displaying shooting data on an interactive interface, wherein the shooting data comprises at least one of shooting pictures and shooting videos; sensing, in the interactive interface, a filter instruction matched with the shooting data; in response to the filter instruction, determining a filter model matched with the shooting data, wherein the filter model is an adversarial model; outputting a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options represent applying filter models of different levels to the shooting data; and displaying a filter image on the interactive interface, wherein the filter image is obtained by filtering the shooting data based on the selected filter model.
According to still another aspect of the embodiments of the invention, an image data processing method comprises: a front-end client uploading shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the front-end client transmitting the shooting data to a background server; the front-end client receiving a filter model matched with the shooting data and returned by the background server, wherein the filter model is an adversarial model determined from a plurality of types of filter models; and the front-end client performing filter processing on the shooting data using the selected filter model to generate a filter image.
According to one aspect of the embodiments of the invention, an image data processing device comprises a first acquisition module, a first determination module, and a first generation module. The first acquisition module is configured to collect shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the first determination module is configured to analyze the shooting data and determine a filter model matched with the shooting data from a plurality of types of filter models, wherein the filter model is an adversarial model; and the first generation module is configured to perform filter processing on the shooting data using the selected filter model to generate a filter image.
According to another aspect of the embodiments of the invention, an image data processing device comprises a first display module, a second determination module, and a second display module. The first display module is configured to display, on an interactive interface, shooting data acquired by a shooting device, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the second determination module is configured to, when a filter instruction is detected in any area of the interactive interface, trigger analysis of a picture of the shooting data and determine a filter model matched with the shooting data, wherein the filter model is an adversarial model; and the second display module is configured to display a filter image on the interactive interface, wherein the filter image is generated by performing filter processing on the shooting data using the selected filter model.
According to a further aspect of the embodiments of the invention, an image data processing device comprises a third display module, a first sensing module, a third determination module, a first output module, and a fourth display module. The third display module is configured to display shooting data on an interactive interface, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the first sensing module is configured to sense, in the interactive interface, a filter instruction matched with the shooting data; the third determination module is configured to determine, in response to the filter instruction, a filter model matched with the shooting data, wherein the filter model is an adversarial model; the first output module is configured to output a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options represent applying filter models of different levels to the shooting data; and the fourth display module is configured to display a filter image on the interactive interface, wherein the filter image is obtained by filtering the shooting data based on the selected filter model.
According to still another aspect of the embodiments of the invention, an image data processing device comprises a first uploading module, a first transmitting module, a first receiving module, and a second generating module. The first uploading module is configured for a front-end client to upload shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the first transmitting module is configured for the front-end client to transmit the shooting data to a background server; the first receiving module is configured for the front-end client to receive a filter model matched with the shooting data and returned by the background server, wherein the filter model is an adversarial model determined from a plurality of types of filter models; and the second generating module is configured for the front-end client to perform filter processing on the shooting data using the selected filter model to generate a filter image.
According to an aspect of the embodiment of the present invention, there is also provided a storage medium including a stored program, where the program, when executed, controls a device in which the storage medium is located to execute the method for processing image data according to any one of the above.
According to another aspect of the embodiment of the present invention, there is also provided a computer device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program stored in the memory, where the computer program when executed causes the processor to execute the method for processing image data according to any one of the above.
In the embodiments of the invention, shooting data of a shooting object is collected, a filter model matched with the shooting data is determined, and the filter model is used to filter the shooting data, thereby generating a filter image processed by the filter model. This improves the imaging effect of the shot image and solves the technical problems that user-side image processing demands high professional skill and has low processing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a block diagram of a hardware configuration of a computer terminal for implementing a processing method of image data according to an embodiment of the present invention;
Fig. 2 is a flowchart of a first method for processing image data according to an embodiment of the present invention;
Fig. 3 is a flowchart of a second method for processing image data according to an embodiment of the present invention;
Fig. 4 is a flowchart of a third method for processing image data according to an embodiment of the present invention;
Fig. 5 is a flowchart of a fourth method for processing image data according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a method of processing image data provided in accordance with an alternative embodiment of the present invention;
Fig. 7 is a block diagram of a first configuration of a processing apparatus for image data provided according to an embodiment of the present invention;
Fig. 8 is a block diagram of a second configuration of a processing apparatus for image data provided according to an embodiment of the present invention;
Fig. 9 is a block diagram of a third configuration of a processing apparatus for image data provided according to an embodiment of the present invention;
Fig. 10 is a block diagram of a fourth configuration of a processing apparatus for image data provided according to an embodiment of the present invention;
Fig. 11 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the application are explained as follows:
A filter is an image data processing method that operates on the pixel values in an image, thereby realizing various special effects.
An intelligent filter automatically adds a filter effect suited to an image by means of machine learning, without requiring the user to manually select a filter and adjust its effect. It generally processes image elements such as channels, pixels, and layers jointly, strengthening some parts of the image/video and weakening others, thereby obtaining visual effects such as gradients, halation, and tone, so that the whole image/video meets human aesthetic perception and finally achieves a better artistic effect.
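The "processing pixel values" in the filter definition above can be illustrated with a minimal sketch (this is not part of the patent; the contrast/brightness parameters and function name are hypothetical):

```python
import numpy as np

def simple_contrast_filter(image, contrast=1.2, brightness=10):
    # Toy pixel-level filter: scale pixel values around the mid-grey
    # point (contrast) and add a constant offset (brightness).
    img = image.astype(np.float32)
    out = (img - 128.0) * contrast + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```

A real intelligent filter would operate jointly on channels, pixels, and layers rather than applying one global pixel transform.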
U-Net is an algorithm for semantic segmentation using a fully convolutional network, referring to the network structure proposed in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation".
A Generative Adversarial Network (GAN) is an unsupervised learning algorithm in which two neural networks play a game against each other, yielding through learning a neural network model (i.e., an adversarial model) that meets the requirements.
A batch normalization layer is a technique for improving the performance and training stability of deep neural networks, overcoming the slow convergence that comes with increasing network depth. The technique presents zero-mean/unit-variance input to any layer in the neural network.
Example 1
According to an embodiment of the present invention, a method embodiment for processing image data is also provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
The method according to the first embodiment of the present application may be implemented in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the image data processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (102a, 102b, ..., 102n are shown; the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It may further include a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as a "data processing circuit". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any other combination. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a kind of processor control (for example, selection of the path of the variable resistor terminal connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image data processing method in the embodiment of the present invention, and the processor 102 executes the software programs and modules stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the image data processing method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the above-described operating environment, the present application provides a processing method of image data as shown in fig. 2. Fig. 2 is a flowchart of a first method for processing image data according to an embodiment of the present application, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos;
Step S204, analyzing the shooting data, and determining a filter model matched with the shooting data from a plurality of types of filter models, wherein the filter model is an adversarial model;
step S206, performing filter processing on the captured data using the selected filter model, and generating a filter image.
Through the above processing, shooting data of a shooting object is collected, a filter model matched with the shooting data is determined, and the filter model is used to filter the shooting data, thereby generating a filter image processed by the filter model. This improves the imaging effect of the shot image and solves the technical problems that user-side image processing demands high professional skill and has low processing efficiency.
As an alternative embodiment, determining the filter model matched with the shooting data comprises: classifying the shooting data to obtain the scene type to which the shooting data belongs, and calling, from a plurality of types of filter models, the filter model matched with the shooting data based on the scene type to which the shooting object in the shooting data belongs. The scene type of the shooting data is important for selecting the filter model: shooting objects in different scenes place different requirements on factors such as image style, overall tone, and light, so a single uniform image processing model cannot be applied to all of them.
Optionally, this embodiment may first perform scene classification on the shooting data to obtain the scene category. For example, scene categories may include food, clothing, outdoors, digital products, and the like. The scene may be identified and classified in various ways. For example, an algorithm model capable of identifying and classifying image scenes may be trained in advance by artificial-intelligence learning, and the scene category of the shot image is obtained by running the image data through this model. Alternatively, in an automatic classification mode, features are extracted from the image through data processing, and when the extracted image features match feature data in a preset scene feature library, the scene category of the shot image is determined.
This embodiment may then invoke a filter model matching the scene category. For example, in a "food" scene, a food filter model matched with food images is called; this model can increase the contrast and saturation of the foreground, so that the food in the processed frames is more vivid and full in color, clearer in detail, and more prominent as the subject, yielding a better imaging effect. In an outdoor scene, an outdoor filter model matched with the outdoor environment is called; this model can strengthen the blurring of the distant view and background in the image, avoiding excessive interference elements in the outdoor environment and reinforcing the presence of the shooting object in the picture.
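The scene-to-model dispatch described above can be sketched as follows (not the patent's implementation; the category names and parameter fields are hypothetical):

```python
# Hypothetical registry of per-scene filter models, keyed by the
# scene category produced by the classifier.
FILTER_MODELS = {
    "food":    {"contrast": 1.30, "saturation": 1.25, "background_blur": 0.0},
    "outdoor": {"contrast": 1.10, "saturation": 1.05, "background_blur": 0.6},
    "digital": {"contrast": 1.15, "saturation": 1.00, "background_blur": 0.3},
}

NEUTRAL_MODEL = {"contrast": 1.0, "saturation": 1.0, "background_blur": 0.0}

def select_filter_model(scene_category):
    # Call up the filter model matched to the classified scene,
    # falling back to a neutral model for unrecognized scenes.
    return FILTER_MODELS.get(scene_category, NEUTRAL_MODEL)
```

In the patent each entry would be a trained adversarial model rather than a parameter dictionary; the lookup structure is what this sketch illustrates.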
As an alternative embodiment, the scene category of the shooting data can be obtained by: extracting image features from the shooting data, wherein the image features comprise at least one of the features of the shooting object and the features of the background image; determining scene parameters of the shooting data based on the image features, wherein the scene parameters represent the product category to which the shooting object recorded in the shooting data belongs; and constructing the scene category of the shooting data based on its scene parameters. By extracting both the object image features and the background image features, this processing takes into account both the type of the shooting object and the type of environment it sits in, accurately determining the scene type of the shooting data. The features of the shooting object may include the class, color, size, and shape of the object; for example, the article may be a cake, white, occupying 50% of the image area, circular, and so on. The features of the background image may include light, brightness, the background scene, and background objects; for example, the background may have clear light and moderate brightness, and be an indoor environment with a table and chairs. Scene parameters of the shooting data are determined by extracting these features, and the scene category is constructed from the scene parameters; for example, the scene category may be "indoor close-up food photography with sufficient light". With an accurate scene type for the shooting object, the subsequent filter model can process the image data with more pertinence and obtain a better filter effect.
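Building a scene category from subject and background features can be sketched as below (a toy illustration, not the patent's method; all feature names are hypothetical):

```python
def build_scene_category(subject_features, background_features):
    # Combine subject features (e.g. object class) with background
    # features (e.g. location, light) into a scene-category label.
    parts = [
        background_features.get("location", "unknown"),
        background_features.get("light", "unknown") + "-light",
        subject_features.get("class", "generic"),
    ]
    return "/".join(parts)
```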
As an alternative embodiment, the network structure of each filter model includes a U-Net model structure with global features and independent batch normalization layers corresponding to different scene types.
Optionally, filtering the shooting data with the selected filter model comprises: extracting global elements from the shooting data based on the U-Net model structure with global features in the filter model, wherein the global elements comprise at least one of light, composition, foreground, and background; migrating the pixel data distribution of the shooting data to the pixel data distribution of the scene type matched with the shooting data according to the global elements; and generating a filter image based on the migration result.
Each filter model may process image data matching a different scene type for that scene type. The U-Net model structure with global features can extract the global elements of light, composition, foreground, and background from the shooting data, achieving a more accurate and stylistically unified image data transformation. In the related art, a generative adversarial network extracts local image features through a local-feature U-Net structure and uses them to process the image at the pixel level, thereby realizing filter processing. However, lacking the extraction and processing of global features, such networks ignore global factors such as the overall light, composition, foreground, and background of the image; regions whose expression should be strengthened are left unenhanced while regions that should be weakened are enhanced, so the generated filter image rarely achieves a satisfactory artistic effect. With the U-Net model structure with global features, the features of the scene category of the shot image can be extracted more accurately and comprehensively, and the pixel data of the shooting data is processed in a way that conforms to the characteristics of the scene category. Specifically, the pixel data distribution of the shooting data is migrated to the pixel data distribution of the scene type matched with the shooting data, which strengthens the integrity of the filtered image data, makes the filter processing fit the style of the scene category more closely, and avoids stylistic inconsistency in the image.
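The "pixel data distribution migration" above can be illustrated with a simple per-channel statistics transfer (a stand-in sketch, not the patent's learned migration; the target mean/std values would come from the matched scene type):

```python
import numpy as np

def migrate_distribution(image, target_mean, target_std):
    # Per-channel statistics transfer: normalize each channel of the
    # input, then rescale it to the target scene's mean/std.
    src = image.astype(np.float32)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        mu, sigma = src[..., c].mean(), src[..., c].std() + 1e-6
        out[..., c] = (src[..., c] - mu) / sigma * target_std[c] + target_mean[c]
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

In the patent this migration is performed by the adversarial model conditioned on the extracted global elements rather than by fixed channel statistics.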
The independent batch normalization layers can normalize the sample features in the training data set, so that the gradient direction at convergence is more accurate and effective. This further overcomes the slow convergence that comes with deepening the neural network, allows the model to take larger gradient-descent steps, makes the filter model easier to obtain through training, and makes the model more stable. For example, with independent batch normalization layers in the network structure, filter models for different scene categories can be trained more quickly, yielding filter models with a better classification effect.
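The batch normalization described above reduces each feature to zero mean and unit variance before a learnable scale/shift; a minimal sketch of the forward pass (training-time statistics only, no running averages):

```python
import numpy as np

def batch_normalize(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch axis to zero mean and
    # unit variance, then apply the learnable scale/shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

"Independent" layers per scene type would simply mean a separate (gamma, beta) pair and separate statistics for each scene category.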
As an alternative embodiment, after the filter model matched with the shooting data is determined from the plurality of types of filter models, the method includes outputting a selection page on the interactive interface, where different filter options represent filter models of different levels for the shooting data, and acquiring the filter model matched with a filter option when that filter option is triggered. After the filter model matched with the shooting data is determined, the user is thus allowed to choose the degree to which the determined filter model processes the image data, which enhances the user's autonomy and helps the user obtain an image that better meets his or her needs. For example, after the filter model has been determined, the interactive interface may provide options such as "preliminary filter processing", "moderate filter processing" and "high filter processing"; for each option, the filter model adjusts its internal parameter settings to process the original image data to a different degree. When any filter option is triggered, the corresponding filter model changes its internal parameters according to the triggered option, completing the call of the adjusted filter model.
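The patent does not specify which internal parameters change per level; one simple way such leveled processing could be realized is to blend the model's full-strength output with the original image. The blend weights below are illustrative assumptions.

```python
import numpy as np

# Hypothetical mapping from the page's filter options to a blend strength.
FILTER_LEVELS = {"preliminary": 0.25, "moderate": 0.5, "high": 1.0}

def apply_filter_level(original, filtered, option):
    """Blend the full-strength filter output with the original image
    according to the selected filter option."""
    alpha = FILTER_LEVELS[option]
    out = (1 - alpha) * original.astype(np.float64) \
          + alpha * filtered.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)

original = np.full((2, 2, 3), 100, dtype=np.uint8)   # toy input image
filtered = np.full((2, 2, 3), 200, dtype=np.uint8)   # toy model output
moderate = apply_filter_level(original, filtered, "moderate")
```

With this scheme, "high" returns the model output unchanged, while the lower levels pull the result back toward the original pixels.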
Fig. 3 is a flowchart of a second method for processing image data according to an embodiment of the present invention. As shown in fig. 3, the process includes the steps of:
Step S302, shooting data acquired by shooting equipment are displayed on an interactive interface, wherein the shooting data comprise at least one of shooting pictures and shooting videos;
Step S304, if a filter instruction is detected in any area of the interactive interface, triggering analysis of the picture of the shooting data and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model;
Step S306, displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the captured data using the selected filter model.
Through the above processing, the shooting data and the filter image are displayed on the interactive interface, the picture of the shooting data is analyzed, and the filter model used to process the shooting data is determined according to the detected filter instruction. The process of handling the image data with the filter model is thereby presented on the interactive interface, achieving the technical effect of intuitively displaying the filter effect and giving users a better experience.
Fig. 4 is a flowchart of a third method for processing image data according to an embodiment of the present invention. As shown in fig. 4, the flow includes the steps of:
step S402, shooting data are displayed on the interactive interface, wherein the shooting data comprise at least one of shooting pictures and shooting videos;
Step S404, a filter instruction matched with shooting data is sensed in the interaction interface;
step S406, responding to the filter instruction, and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model;
Step S408, outputting a selection page on the interactive interface, wherein the selection page provides at least one filter option, and different filter options are used for representing filter models with different levels for shooting data;
in step S410, a filter image is displayed on the interactive interface, where the filter image is an image obtained by performing filter processing on the captured data based on the selected filter model.
Through the above processing, the specific filter model is determined on the interactive interface according to the filter instruction corresponding to the filter-model type and the filter options representing filter models of different levels, and the shooting data is filter-processed by the determined model, achieving the technical effect of providing an interactive means for selecting both the type and the level of the filter model.
As an alternative embodiment, if the picture quality of the shooting data is lower than a quality standard, a filter instruction matching the shooting data is triggered. When the picture quality is too low, a conventional filter model may be unable to process the shooting data effectively. In that case, the above processing triggers a filter instruction to call a filter model matched to the current image quality. For example, the called filter model may be one that performs filter processing directly at the current picture quality, or one that first improves the picture quality with a predetermined algorithm and then performs filter processing.
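The patent does not define its quality standard; as one hedged example, a sharpness score based on the variance of a discrete Laplacian is a common proxy for picture quality, and a threshold on it could serve as the trigger. The threshold value and function names below are illustrative assumptions.

```python
import numpy as np

QUALITY_THRESHOLD = 50.0  # hypothetical standard for acceptable sharpness

def sharpness_score(gray):
    """Variance of a discrete Laplacian; low values suggest a flat or
    blurry frame with little recoverable detail."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def needs_quality_filter(gray):
    """True when a quality-matched filter instruction should be triggered."""
    return sharpness_score(gray) < QUALITY_THRESHOLD

rng = np.random.default_rng(2)
noisy = rng.integers(0, 256, size=(32, 32)).astype(np.float64)  # high detail
flat = np.full((32, 32), 128.0)                                 # no detail
```

A production system would likely combine several such metrics (noise, exposure, resolution) before deciding which model to call.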
As an alternative embodiment, before displaying the filter image on the interactive interface, the method further comprises the steps of receiving a selection instruction in the area where the corresponding filter option is located on the selection page, triggering the corresponding filter option in response to the selection instruction, and calling the filter model of the corresponding level. Through the processing, an interaction means is provided for a user, so that the user can autonomously select the level of the filter model, and a filter image meeting the requirements is obtained.
Fig. 5 is a flowchart of a fourth method for processing image data according to an embodiment of the present invention. As shown in fig. 5, the process includes the steps of:
step S502, the front-end client uploads shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos;
Step S504, the front-end client transmits shooting data to the background server;
Step S506, the front-end client receives a filter model matched with the shooting data returned by the background server, wherein the filter model is an adversarial model determined from a plurality of types of filter models;
In step S508, the front-end client performs filter processing on the captured data using the selected filter model, and generates a filter image.
Through the processing, the model call of the front-end client from the background server is realized, the background server returns a filter model matched with the shooting data transmitted by the front-end client based on the shooting data, and the front-end client processes the shooting data by using the filter model to generate a filter image.
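The client-server exchange in steps S502 to S508 can be sketched as a minimal mock; the class and method names here are illustrative, not the patent's actual interfaces, and real transmission would go over a network rather than a direct call.

```python
class BackgroundServer:
    """Returns the filter-model identifier matched to the shooting data."""
    MODELS = {"food": "gan_food_v1", "clothing": "gan_clothing_v1"}

    def match_filter_model(self, shooting_data):
        scene = shooting_data["scene"]  # stand-in for scene classification
        return self.MODELS.get(scene, "gan_generic_v1")

class FrontEndClient:
    def __init__(self, server):
        self.server = server

    def request_filter_image(self, shooting_data):
        # S504-S506: transmit shooting data, receive the matched model.
        model_id = self.server.match_filter_model(shooting_data)
        # S508: the client would run the returned model locally;
        # here we just tag the image to show the data flow.
        return {"model": model_id,
                "image": f"filtered:{shooting_data['image']}"}

client = FrontEndClient(BackgroundServer())
result = client.request_filter_image({"image": "dish.jpg", "scene": "food"})
```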
Fig. 6 is a schematic diagram of a processing method of image data provided according to an alternative embodiment of the present invention. As shown in fig. 6, the image processing method according to the alternative embodiment of the present invention includes two major units, namely front-end interaction and background algorithm.
The flow in the front-end interaction unit in fig. 6 comprises the following steps:
S1, uploading an image or a video by a merchant, S2, displaying scene/category information, analyzing the scene/category of the image by using a scene classification module, displaying the scene category corresponding to the image, S3, processing image data by using a filter model corresponding to the scene category to which the image belongs in an intelligent filter module, and S4, obtaining an intelligent filter image.
The background algorithm unit in fig. 6 includes a scene classification module and an intelligent filter module. The scene classification module predicts the scene category to which an image belongs by analyzing the image uploaded by the user, and different filter models are then used to process the image data according to the obtained scene category. The filter model is an adversarial model built on a generative adversarial network, which comprises a U-Net network structure with global features and style-specific independent batch normalization layers.
A generative adversarial network (GAN) is a neural-network learning algorithm that can take an original image as an input sample and generate a brand-new image with a filter effect. The GAN comprises a generator and a discriminator: through an encoder-decoder, the generator produces a filter-processed image data set from the original image data serving as the sample set, and the discriminator compares the images before and after processing to distinguish whether an image is an original real image or one generated by the generator. In this adversarial game the generator and the discriminator continuously improve their respective performance until a dynamic balance is reached and an adversarial model meeting the requirements is obtained. The GAN may use a minimax loss function to constrain the model's learning process. By adding a U-Net network structure based on global features and style-specific independent batch normalization layers to the GAN, the network can be optimized and the adversarial model guided to learn features that are both globally aware and style-specific. For example, a GAN trained on a food theme yields a food-themed adversarial model that can process images of the food scene category in a targeted way, so the overall style of the processed image remains consistent and fits the food theme closely.
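The patent does not spell out its loss; the max-min (minimax) objective it refers to is, in the standard GAN formulation, a two-player value function that the discriminator D maximizes and the generator G minimizes:

```latex
\min_{G}\max_{D}\; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```

Here x is a real sample, z is the generator's input (in this document, the original image rather than random noise), and at the dynamic balance described above D can no longer distinguish real images from generated ones.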
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the image data processing method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, or may be implemented by hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is further provided an apparatus for implementing the above-mentioned first image data processing method, and fig. 7 is a block diagram of a first image data processing apparatus according to an embodiment of the present invention, as shown in fig. 7, where the apparatus includes a first acquisition module 702, a first determination module 704, and a first generation module 706, and the apparatus is described below.
The first acquisition module 702 is configured to acquire shooting data of a shooting object, where the shooting data includes at least one of a shooting picture and a shooting video; the first determination module 704 is connected to the first acquisition module 702 and configured to analyze the shooting data and determine a filter model matching the shooting data from a plurality of types of filter models, where the filter model is an adversarial model; and the first generation module 706 is connected to the first determination module 704 and configured to perform filter processing on the shooting data using the selected filter model and generate a filter image.
It should be noted that the first acquisition module 702, the first determination module 704 and the first generation module 706 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in embodiment 1. It should also be noted that the above modules may be operated as a part of the apparatus in the computer terminal 10 provided in embodiment 1.
According to an embodiment of the present invention, there is further provided an apparatus for implementing the above-mentioned image data processing method, and fig. 8 is a block diagram of an image data processing apparatus according to an embodiment of the present invention, as shown in fig. 8, where the apparatus includes a first display module 802, a second determination module 804 and a second display module 806, and the apparatus is described below.
The first display module 802 is configured to display, on an interactive interface, shooting data acquired by a shooting device, where the shooting data includes at least one of a shooting picture and a shooting video; the second determination module 804 is connected to the first display module 802 and configured to trigger, when a filter instruction is detected in any area of the interactive interface, analysis of the picture of the shooting data to determine a filter model matched with the shooting data, where the filter model is an adversarial model; and the second display module 806 is connected to the second determination module 804 and configured to display, on the interactive interface, a filter image, where the filter image is an image generated by performing filter processing on the shooting data using the selected filter model.
Here, it should be noted that the first display module 802, the second determining module 804 and the second display module 806 correspond to steps S302 to S306 in embodiment 1, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the above-described module may be operated as a part of the apparatus in the computer terminal 10 provided in embodiment 1.
According to an embodiment of the present invention, there is further provided an apparatus for implementing the above-mentioned third image data processing method. FIG. 9 is a block diagram of a third image data processing apparatus according to an embodiment of the present invention; as shown in FIG. 9, the apparatus includes a third display module 902, a first sensing module 904, a third determining module 906, a first output module 908 and a fourth display module 910, and the apparatus is described below.
The third display module 902 is configured to display shooting data on the interactive interface, where the shooting data includes at least one of a shooting picture and a shooting video; the first sensing module 904 is connected to the third display module 902 and configured to sense, in the interactive interface, a filter instruction matched with the shooting data; the third determining module 906 is connected to the first sensing module 904 and configured to determine, in response to the filter instruction, a filter model matched with the shooting data, where the filter model is an adversarial model; the first output module 908 is connected to the third determining module 906 and configured to output a selection page on the interactive interface, where the selection page provides at least one filter option and different filter options represent filter models of different levels for the shooting data; and the fourth display module 910 is connected to the first output module 908 and configured to display a filter image on the interactive interface, where the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
It should be noted that the third display module 902, the first sensing module 904, the third determining module 906, the first output module 908 and the fourth display module 910 correspond to steps S402 to S410 in embodiment 1, and the five modules are the same as the examples and the application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the above-described module may be operated as a part of the apparatus in the computer terminal 10 provided in embodiment 1.
According to an embodiment of the present invention, there is further provided an apparatus for implementing the above-mentioned fourth image data processing method. Fig. 10 is a block diagram of a fourth image data processing apparatus according to an embodiment of the present invention; as shown in fig. 10, the apparatus includes a first uploading module 1002, a first transmitting module 1004, a first receiving module 1006 and a second generating module 1008, and the apparatus is described below.
The first uploading module 1002 is configured for a front-end client to upload shooting data of a shooting object, where the shooting data includes at least one of a shooting picture and a shooting video; the first transmitting module 1004 is connected to the first uploading module 1002 and configured for the front-end client to transmit the shooting data to a background server; the first receiving module 1006 is connected to the first transmitting module 1004 and configured for the front-end client to receive a filter model matching the shooting data returned by the background server, where the filter model is an adversarial model determined from multiple types of filter models; and the second generating module 1008 is connected to the first receiving module 1006 and configured for the front-end client to perform filter processing on the shooting data using the selected filter model, so as to generate a filter image.
Here, the first uploading module 1002, the first transmitting module 1004, the first receiving module 1006 and the second generating module 1008 correspond to steps S502 to S508 in embodiment 1, and the four modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the above-described module may be operated as a part of the apparatus in the computer terminal 10 provided in embodiment 1.
Example 3
Embodiments of the present invention may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the computer terminal can execute the program code of the following steps: collecting shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; analyzing the shooting data and determining a filter model matched with the shooting data from a plurality of types of filter models, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate a filter image.
Alternatively, fig. 11 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 11, the computer terminal may include one or more (only one is shown in the figure) processors 1102, memory 1104, and the like.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the image data processing method and apparatus in the embodiments of the present invention, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the image data processing method described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to collect shooting data of the shooting object, wherein the shooting data includes at least one of a shooting picture and a shooting video, analyze the shooting data, determine a filter model matching the shooting data from a plurality of kinds of filter models, wherein the filter model is an adversarial model, and perform filter processing on the shooting data using the selected filter model to generate a filter image.
Optionally, the processor may further perform program code for analyzing the photographing data, determining a filter model matching the photographing data from among a plurality of types of filter models, including scene classification of the photographing data, obtaining a scene category to which the photographing data belongs, and calling the filter model matching the photographing data from among the plurality of types of filter models based on the scene category to which the photographing object belongs in the photographing data.
Optionally, the processor may further perform a program code for performing scene classification on the shot data to obtain a scene class to which the shot data belongs, where the program code includes extracting image features in the shot data, where the image features include at least one of features of a shot object and features of a background image, determining scene parameters of the shot data based on the image features, where the scene parameters are used to characterize a product class to which the shot object recorded in the shot data belongs, and constructing the scene class to which the shot data belongs based on the scene parameters of the shot data.
Optionally, the processor may further execute program code for each filter model network structure comprising a global feature U-Net model structure and independent batch normalization layers corresponding to different scene types.
Optionally, the processor may further execute program code for performing filter processing on the captured data using the selected filter model, including extracting global elements from the captured data based on a U-Net model structure having global features in the filter model, wherein the global elements include at least one of light, composition, foreground, and background, migrating pixel data distribution of the captured data to pixel data distribution of a scene type matching the captured data according to the global elements, and generating a filter image based on the migration result.
Optionally, the processor may further execute program code for, after determining a filter model matching the captured data from among a plurality of types of filter models, outputting a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options are used for characterizing the filter model with different levels for the captured data, and acquiring the filter model matching the filter option if any one of the filter options is triggered.
The processor can call information and application programs stored in the memory through the transmission device to execute the following steps: displaying, on an interactive interface, shooting data acquired by shooting equipment, wherein the shooting data comprises at least one of shooting pictures and shooting videos; if a filter instruction is detected in any area of the interactive interface, triggering analysis of the picture of the shooting data to determine a filter model matched with the shooting data, wherein the filter model is an adversarial model; and displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shooting data using the selected filter model.
The processor can call information and application programs stored in the memory through the transmission device to execute the following steps: displaying shooting data on an interactive interface, wherein the shooting data comprises at least one of shooting pictures and shooting videos; sensing, in the interactive interface, a filter instruction matched with the shooting data; responding to the filter instruction and determining a filter model matched with the shooting data, wherein the filter model is an adversarial model; outputting a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options represent filter models of different levels for the shooting data; and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by performing filter processing on the shooting data based on the selected filter model.
Optionally, the processor may further execute program code for triggering a filter instruction matching the shot data if the picture quality of the shot data is lower than the standard data.
Optionally, the processor may further execute program code for, before displaying the filter image on the interactive interface, receiving a selection instruction in a region where a corresponding filter option on the selection page is located, triggering the corresponding filter option in response to the selection instruction, and calling a filter model of a corresponding level.
The processor can call information and application programs stored in the memory through the transmission device to execute the following steps: the front-end client uploads shooting data of a shooting object, wherein the shooting data comprises at least one of shooting pictures and shooting videos; the front-end client transmits the shooting data to the background server; the front-end client receives a filter model matched with the shooting data returned by the background server, wherein the filter model is an adversarial model determined from a plurality of types of filter models; and the front-end client performs filter processing on the shooting data using the selected filter model to generate a filter image.
According to the embodiment of the invention, by collecting shooting data of a shooting object, determining a filter model matched with the shooting data, and performing filter processing on the shooting data with that filter model, the purpose of generating a filter image processed by the filter model is achieved. This realizes the technical effect of improving the imaging effect of the shot image, and solves the technical problems that image processing places high professional demands on the user side and has low processing efficiency.
It will be appreciated by those skilled in the art that the configuration shown in fig. 11 is merely illustrative, and the computer terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, etc. Fig. 11 does not limit the structure of the electronic device. For example, the computer terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 11, or have a different configuration from that shown in fig. 11.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may include a flash disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, etc.
Example 4
The embodiment of the invention also provides a storage medium. Alternatively, in this embodiment, the storage medium may be used to store program codes executed by the processing method for image data provided in the first embodiment.
Alternatively, in this embodiment, the storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Optionally, in this embodiment, the storage medium is arranged to store program code for: acquiring shooting data of the shooting object, wherein the shooting data comprises at least one of a shooting picture and a shooting video; analyzing the shooting data and determining a filter model matching the shooting data from a plurality of kinds of filter models, wherein the filter model is an adversarial model; and performing filter processing on the shooting data using the selected filter model to generate the filter image.
Alternatively, in the present embodiment, the storage medium is configured to store program code for analyzing the photographing data, determining a filter model matching the photographing data from among a plurality of kinds of filter models, including scene classification of the photographing data, obtaining a scene class to which the photographing data belongs, and calling the filter model matching the photographing data from among the plurality of kinds of filter models based on the scene class to which the photographing object belongs in the photographing data.
Optionally, in this embodiment the storage medium is arranged to store program code for performing the steps of scene classification of the shot data, obtaining a scene class to which the shot data belongs, comprising extracting image features in the shot data, wherein the image features comprise at least one of features of a shot object and features of a background image, determining scene parameters of the shot data based on the image features, wherein the scene parameters are used for characterizing a product class to which the shot object recorded in the shot data belongs, and constructing the scene class to which the shot data belongs based on the scene parameters of the shot data.
Optionally, in this embodiment, the storage medium is arranged to store program code in which each filter model's network structure comprises a U-Net model structure with global features and independent batch normalization layers corresponding to different scene types.
Optionally, in this embodiment, the storage medium is arranged to store program code for performing filter processing on the shooting data using the selected filter model, comprising: extracting global elements from the shooting data based on a U-Net model structure having global features in the filter model, wherein the global elements comprise at least one of light, composition, foreground and background; migrating the pixel-data distribution of the shooting data to the pixel-data distribution of the scene type matching the shooting data according to the global elements; and generating a filter image based on the migration result.
Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps: after determining a filter model matching the shot data from among a plurality of filter models, outputting a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options characterize filter models of different levels for the shot data; and, in case any one of the filter options is triggered, obtaining the filter model matching that filter option.
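The option-to-model mapping implied by the selection page can be sketched as a small handler (level names and model identifiers are hypothetical):

```python
# Hypothetical mapping from filter options on the selection page to
# filter models of different levels.
FILTER_LEVELS = {
    "light": "level1_filter_model",
    "standard": "level2_filter_model",
    "strong": "level3_filter_model",
}

def on_filter_option_triggered(option):
    # Obtain the filter model matching the triggered filter option.
    if option not in FILTER_LEVELS:
        raise KeyError(f"unknown filter option: {option}")
    return FILTER_LEVELS[option]
```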
Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps: displaying, on the interactive interface, shot data acquired by the shooting device, wherein the shot data comprises at least one of a shot picture and a shot video; if a filter instruction is detected in any area of the interactive interface, triggering analysis of the shot data and determining a filter model matching the shot data, wherein the filter model is an adversarial model; and displaying a filter image on the interactive interface, wherein the filter image is an image generated by performing filter processing on the shot data using the selected filter model.
Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps: displaying the shot data on the interactive interface, wherein the shot data comprises at least one of a shot picture and a shot video; sensing, in the interactive interface, a filter instruction matching the shot data; determining, in response to the filter instruction, a filter model matching the shot data, wherein the filter model is an adversarial model; outputting a selection page on the interactive interface, the selection page providing at least one filter option, wherein different filter options characterize filter models of different levels for the shot data; and displaying a filter image on the interactive interface, wherein the filter image is an image obtained by performing filter processing on the shot data based on the selected filter model.
Alternatively, in the present embodiment, the storage medium is arranged to store program code for executing the step of triggering a filter instruction matching the shot data if the picture quality of the shot data is lower than a preset standard.
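The quality-gated trigger above can be sketched with a crude quality proxy; the contrast-based metric and the threshold value are assumptions for illustration, not the patent's actual quality measure:

```python
def picture_quality(pixels):
    # Crude quality proxy: normalized intensity contrast in [0, 1].
    mean = sum(pixels) / len(pixels)
    std = (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5
    return min(std / 128.0, 1.0)

def should_trigger_filter(pixels, standard=0.2):
    # Trigger the filter instruction only when picture quality falls
    # below the preset standard.
    return picture_quality(pixels) < standard
```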
Optionally, in this embodiment, the storage medium is arranged to store program code for, before displaying the filter image on the interactive interface, receiving a selection instruction in a region of the selection page where the corresponding filter option is located, triggering the corresponding filter option in response to the selection instruction, and invoking the filter model of the corresponding level.
Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps: the front-end client uploads shot data of the shot object, wherein the shot data comprises at least one of a shot picture and a shot video; the front-end client transmits the shot data to a background server; the front-end client receives a filter model matching the shot data returned by the background server, wherein the filter model is an adversarial model determined from among a plurality of filter models; and the front-end client performs filter processing on the shot data using the selected filter model to generate the filter image.
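The client/server split above — the server matches the model, the client applies it — can be sketched as two small classes. Both classes, their method names, and the string "model" are invented; the real system would exchange serialized model handles over a network rather than in-process calls:

```python
class BackgroundServer:
    def match_filter_model(self, shot_data):
        # Stand-in for server-side selection of an adversarial filter
        # model from among a plurality of filter models.
        return shot_data.get("scene", "generic") + "_gan_filter"

class FrontEndClient:
    def __init__(self, server):
        self.server = server

    def upload_and_filter(self, shot_data):
        # 1) Transmit the shot data to the background server and
        #    receive the matching filter model.
        model = self.server.match_filter_model(shot_data)
        # 2) Perform filter processing locally with the returned model
        #    (simulated here by tagging the image).
        return {"model": model, "filter_image": "filtered:" + shot_data["image"]}
```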
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical function division, and in actual implementation there may be another division manner: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the present invention.
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011003453.1A CN114257730B (en) | 2020-09-22 | 2020-09-22 | Image data processing method, device, storage medium and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011003453.1A CN114257730B (en) | 2020-09-22 | 2020-09-22 | Image data processing method, device, storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114257730A CN114257730A (en) | 2022-03-29 |
CN114257730B true CN114257730B (en) | 2025-02-25 |
Family
ID=80788430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011003453.1A Active CN114257730B (en) | 2020-09-22 | 2020-09-22 | Image data processing method, device, storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114257730B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115623323A (en) * | 2022-11-07 | 2023-01-17 | 荣耀终端有限公司 | Shooting method and electronic equipment |
CN116681788B (en) * | 2023-06-02 | 2024-04-02 | 萱闱(北京)生物科技有限公司 | Image electronic dyeing method, device, medium and computing equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109068056A (en) * | 2018-08-17 | 2018-12-21 | Oppo广东移动通信有限公司 | Electronic equipment, filter processing method of image shot by electronic equipment and storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101272390B (en) * | 2008-05-12 | 2012-03-21 | 腾讯科技(深圳)有限公司 | Filter link establishing method and device for media file |
CN105323456B (en) * | 2014-12-16 | 2018-11-30 | 维沃移动通信有限公司 | Image preview method for photographing device and image photographing device |
CN104636759B (en) * | 2015-02-28 | 2019-01-15 | 成都品果科技有限公司 | A kind of method and picture filter information recommendation system for obtaining picture and recommending filter information |
CN105224950A (en) * | 2015-09-29 | 2016-01-06 | 小米科技有限责任公司 | The recognition methods of filter classification and device |
US10984286B2 (en) * | 2018-02-02 | 2021-04-20 | Nvidia Corporation | Domain stylization using a neural network model |
CN110309357B (en) * | 2018-02-27 | 2022-12-02 | 腾讯科技(深圳)有限公司 | Application data recommendation method, model training method, device and storage medium |
CN108960408B (en) * | 2018-06-12 | 2021-07-13 | 杭州米绘科技有限公司 | Stylization system and method for ultrahigh-definition resolution pattern |
CN108765295B (en) * | 2018-06-12 | 2019-11-26 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus and storage medium |
CN108985493A (en) * | 2018-06-22 | 2018-12-11 | 哈尔滨理工大学 | A kind of ground class variation prediction method based on self-adapting changeable filter |
CN109191403A (en) * | 2018-09-07 | 2019-01-11 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, computer-readable storage medium |
CN109325926B (en) * | 2018-09-30 | 2021-07-23 | 武汉斗鱼网络科技有限公司 | Automatic filter implementation method, storage medium, device and system |
CN111107424A (en) * | 2018-10-25 | 2020-05-05 | 武汉斗鱼网络科技有限公司 | Outdoor live broadcast filter implementation method, storage medium, device and system |
CN109379572B (en) * | 2018-12-04 | 2020-03-06 | 北京达佳互联信息技术有限公司 | Image conversion method, device, electronic device and storage medium |
CN109727208A (en) * | 2018-12-10 | 2019-05-07 | 北京达佳互联信息技术有限公司 | Filter recommendation method, device, electronic device and storage medium |
EP3709209A1 (en) * | 2019-03-15 | 2020-09-16 | Koninklijke Philips N.V. | Device, system, method and computer program for estimating pose of a subject |
CN110458060A (en) * | 2019-07-30 | 2019-11-15 | 暨南大学 | A vehicle image optimization method and system based on adversarial learning |
CN111275107A (en) * | 2020-01-20 | 2020-06-12 | 西安奥卡云数据科技有限公司 | Multi-label scene image classification method and device based on transfer learning |
CN111416950B (en) * | 2020-03-26 | 2023-11-28 | 腾讯科技(深圳)有限公司 | Video processing method and device, storage medium and electronic equipment |
- 2020-09-22: CN application CN202011003453.1A, patent CN114257730B, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109068056A (en) * | 2018-08-17 | 2018-12-21 | Oppo广东移动通信有限公司 | Electronic equipment, filter processing method of image shot by electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114257730A (en) | 2022-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10924661B2 (en) | Generating image capture configurations and compositions | |
US11070717B2 (en) | Context-aware image filtering | |
KR101688352B1 (en) | Recommending transformations for photography | |
CN106161939B (en) | Photo shooting method and terminal | |
CN108198177A (en) | Image acquisition method, device, terminal and storage medium | |
CN110659581B (en) | Image processing method, device, equipment and storage medium | |
JP2017531950A (en) | Method and apparatus for constructing a shooting template database and providing shooting recommendation information | |
CN107424117B (en) | Image beautifying method and device, computer readable storage medium and computer equipment | |
CN114257730B (en) | Image data processing method, device, storage medium and computer equipment | |
CN114360018B (en) | Rendering method and device of three-dimensional facial expression, storage medium and electronic device | |
WO2015180684A1 (en) | Mobile terminal-based shooting simulation teaching method and system, and storage medium | |
CN106815803A (en) | The processing method and processing device of picture | |
CN110581950B (en) | Camera, system and method for selecting camera settings | |
CN107730461A (en) | Image processing method, apparatus, device and medium | |
CN109191371A (en) | A method of it judging automatically scenery type and carries out image filters processing | |
CN109829364A (en) | A kind of expression recognition method, device and recommended method, device | |
CN111127367A (en) | Method, device and system for face image processing | |
CN108540722B (en) | Method and device for controlling camera to shoot and computer readable storage medium | |
CN114170472A (en) | Image processing method, readable storage medium and computer terminal | |
WO2023217138A1 (en) | Parameter configuration method and apparatus, device, storage medium and product | |
CN109472230B (en) | Automatic athlete shooting recommendation system and method based on pedestrian detection and Internet | |
CN112019800A (en) | Image sharing method, device, range hood and storage medium | |
WO2016079609A1 (en) | Generation apparatus and method for evaluation information, electronic device and server | |
WO2016131226A1 (en) | Intelligent terminal and image processing method and apparatus therefor | |
CN114143429B (en) | Image shooting method, device, electronic equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2023-08-29
Address after: Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province
Applicant after: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.
Address before: Box 847, four, Grand Cayman capital, Cayman Islands, UK
Applicant before: ALIBABA GROUP HOLDING Ltd.
GR01 | Patent grant | ||
GR01 | Patent grant |