CN113469874A - Beauty treatment method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113469874A CN113469874A CN202110726714.0A CN202110726714A CN113469874A CN 113469874 A CN113469874 A CN 113469874A CN 202110726714 A CN202110726714 A CN 202110726714A CN 113469874 A CN113469874 A CN 113469874A
- Authority
- CN
- China
- Prior art keywords
- template
- makeup
- face
- color space
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention provides a beauty treatment method and device, electronic equipment and a storage medium. The method comprises the following steps: collecting a target face image of a user; acquiring at least one makeup template and makeup effect information corresponding to each makeup template; determining a makeup processing area corresponding to each makeup template on the target face image; and adopting a fusion strategy corresponding to each makeup template to fuse each makeup template, the corresponding makeup effect information and the corresponding makeup processing area, so as to generate a virtual face image. In the embodiment of the invention, different makeup templates are fused with the fusion strategies corresponding to them, which improves the effect of virtual face makeup.
Description
[ technical field ]
The embodiment of the invention relates to the technical field of image processing, in particular to a cosmetic processing method and device, electronic equipment and a storage medium.
[ background of the invention ]
Makeup is the rendering, drawing and arranging of the face, facial features and other parts of the human body, so as to enhance dimensionality, adjust shape and color, conceal flaws and convey expression, thereby achieving beautification. With the steady growth of the makeup industry and the popularization of AI technology, virtual face makeup is leading change in the makeup industry. Virtual makeup not only reduces cosmetics marketing costs, but also greatly accelerates the promotion of new products through online marketing channels, so makeup consumption has great potential.
At present, virtual face makeup techniques use the same map fusion strategy for different makeup processing areas of a face, which produces a strong pasted-on look and an unnatural makeup effect, so the overall effect of virtual face makeup is poor.
[ summary of the invention ]
In view of this, the embodiment of the invention provides a makeup processing method, a makeup processing device, an electronic device and a storage medium, which are used for improving the makeup effect of a virtual face.
In a first aspect, an embodiment of the present invention provides a cosmetic treatment method, including:
collecting a target face image of a user;
acquiring at least one makeup template and acquiring makeup effect information corresponding to each makeup template;
determining a makeup processing area corresponding to each makeup template on the target face image;
and adopting a fusion strategy corresponding to each makeup template, and fusing each makeup template, the corresponding makeup effect information and the corresponding makeup processing area to generate a virtual face image.
In one possible implementation, the fusion policy includes one or more of a policy for fusion in a red, green, and blue color mode RGB color space domain, a policy for fusion in a YUV color space domain, or a policy for fusion in a hue saturation value color model HSV color space domain.
In one possible implementation, the at least one makeup template includes one or more of a lip template, a face skin color template, or at least one preset template, the at least one preset template including one or more of an eye shadow template, a blush template, an eyebrow tracing template, a cosmetic pupil template, or a stereo enhancement template;
if the makeup template is the eye shadow template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in an RGB color space domain;
if the makeup template is a blush template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in an RGB color space domain;
if the makeup template is the eyebrow drawing template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in an RGB color space domain;
if the makeup template is the makeup pupil template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the face skin color template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in an RGB color space domain;
if the makeup template is a three-dimensional enhanced template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion on a Y channel of a YUV color space domain;
and if the makeup template is the lip template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in an RGB color space domain, a YUV color space domain and an HSV color space domain.
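The per-template strategy mapping in this implementation can be sketched as a small dispatch table; the template identifiers below are hypothetical names chosen for illustration, not names used by the patent:

```python
# Hypothetical mapping from makeup template type to the color-space
# domain(s) in which its fusion is performed, as described above.
FUSION_STRATEGY = {
    "eye_shadow": ["RGB"],
    "blush": ["RGB"],
    "eyebrow": ["RGB"],
    "cosmetic_pupil": ["RGB"],
    "face_skin": ["RGB"],
    "stereo_enhancement": ["YUV_Y"],   # Y channel of YUV only
    "lip": ["RGB", "YUV", "HSV"],      # fused across all three domains
}

def fusion_domains(template_type: str) -> list[str]:
    """Return the color-space domains used to fuse the given template."""
    return FUSION_STRATEGY[template_type]
```

A fusion module could then branch on the returned domains instead of hard-coding one strategy for every region, which is the shortcoming this implementation addresses.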
In one possible implementation, the at least one makeup template includes at least one preset template;
before performing fusion processing on each makeup template, the corresponding makeup effect information and the corresponding makeup processing area by adopting the fusion strategy corresponding to each makeup template, the method further comprises the following steps:
and aligning the at least one preset template with the corresponding makeup processing area.
In one possible implementation, the aligning the at least one preset template with the corresponding makeup processing area includes:
detecting the standard face image by adopting a face key point alignment technology to obtain standard key point information;
triangulation calculation processing is carried out on the standard key point information to generate a second triangulation array;
and deforming the preset templates in the color space domain corresponding to each preset template according to the second triangulation array, so as to align with the makeup processing areas corresponding to the preset templates.
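The deformation step relies on mapping each triangle of the standard key points onto the corresponding triangle of the detected face key points. A minimal sketch of solving the per-triangle affine transform (assuming NumPy; the patent does not name a library or a solver) is:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve the 2x3 affine transform mapping one triangle onto another.

    src_tri, dst_tri: (3, 2) sequences of (x, y) vertices, taken from the
    same triangulation array applied to the standard key points and the
    detected face key points respectively.
    """
    src = np.asarray(src_tri, dtype=np.float64)
    dst = np.asarray(dst_tri, dtype=np.float64)
    # Build the linear system [x, y, 1] @ M.T = [x', y'] for 3 vertex pairs.
    A = np.hstack([src, np.ones((3, 1))])     # (3, 3)
    M = np.linalg.solve(A, dst).T             # (2, 3) affine matrix
    return M
```

In practice each triangle of the template would then be warped with this transform (e.g. with an affine warp routine) so the template lands on the corresponding makeup processing area.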
In a possible implementation manner, when the preset template is an eye shadow template, the color space domain corresponding to the preset template is an RGB color space domain; or,
when the preset template is the eyebrow drawing template, the color space domain corresponding to the preset template is a Y channel of the YUV color space domain; or when the preset template is a three-dimensional enhanced template, the color space domain corresponding to the preset template is a Y channel of a YUV color space domain; or,
and when the preset template is a blush template, the color space domain corresponding to the preset template is a Y channel of the YUV color space domain.
In one possible implementation, the at least one preset template includes a cosmetic pupil template:
the determining of the makeup processing area corresponding to each makeup template on the target face image comprises the following steps:
determining the pupil center and the pupil size of human eyes according to the key points of the human faces acquired from the target human face image so as to determine a makeup processing area corresponding to the makeup template, wherein the makeup processing area corresponding to the makeup template comprises a circular area which takes the pupil center of the human eyes as the center of a circle and takes the pupil size as the diameter;
aligning the at least one preset template with the corresponding makeup treatment area, including:
and deforming the cosmetic pupil template in an RGB color space domain so as to align the cosmetic pupil template with a cosmetic treatment area corresponding to the cosmetic pupil template.
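The circular makeup processing area described here, centred on the pupil with the pupil size as diameter, can be rasterised as a binary mask. This is an illustrative sketch; `center` and `diameter` are assumed to come from the detected face key points:

```python
import numpy as np

def pupil_mask(h, w, center, diameter):
    """Binary mask of the circular cosmetic-pupil region: a circle centred
    on the detected pupil centre with the pupil size as its diameter."""
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = center
    r = diameter / 2.0
    return ((xx - cx) ** 2 + (yy - cy) ** 2) <= r ** 2
```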
In one possible implementation, the cosmetic template includes a human face skin color template, and the obtaining at least one cosmetic template includes:
and carrying out face skin color segmentation on the target face image to generate the face skin color template.
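The patent does not specify the segmentation algorithm; a classical Cr/Cb-threshold baseline is one way such a face skin color template could be produced, sketched here purely for illustration:

```python
import numpy as np

def skin_mask(img_rgb):
    """Rough skin segmentation by thresholding Cr/Cb, a classical baseline;
    an illustrative stand-in, not the method claimed by the patent."""
    img = np.asarray(img_rgb, dtype=np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # BT.601 RGB -> Cr/Cb conversion (offset-128 form).
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    # Commonly used skin-tone ranges in the Cr/Cb plane.
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
```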
In one possible implementation, the cosmetic template includes a lip template, and the obtaining at least one cosmetic template includes:
triangulation calculation processing is carried out on lip key points in the face key points acquired from the target face image, and a first triangulation array is generated;
obtaining a triangular area of the target face image according to the first triangulation array;
filling the triangular area to obtain a lip binarization template image;
and performing edge gradient processing on the lip binarization template image by adopting a filtering technology to generate the lip template.
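The middle steps of this pipeline, taking the triangular areas from the triangulation array and filling them into a binary template image, can be sketched with a barycentric inside test; the function shape is assumed for illustration, not taken from the patent:

```python
import numpy as np

def fill_triangles(h, w, points, triangles):
    """Rasterise the triangles formed by the lip key points into a binary
    template image. `triangles` holds index triples into `points`."""
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for i, j, k in triangles:
        a, b, c = points[i], points[j], points[k]
        # Edge-sign test: a pixel is inside the triangle if the three
        # edge cross products do not have mixed signs.
        d1 = (xx - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (yy - b[1])
        d2 = (xx - c[0]) * (b[1] - c[1]) - (b[0] - c[0]) * (yy - c[1])
        d3 = (xx - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (yy - a[1])
        neg = (d1 < 0) | (d2 < 0) | (d3 < 0)
        pos = (d1 > 0) | (d2 > 0) | (d3 > 0)
        mask |= ~(neg & pos)
    return mask
```

The final edge-gradient step would then soften the mask boundary, for example with a Gaussian filter, to yield the lip template with a natural falloff.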
In one possible implementation, the at least one makeup template includes at least one preset template;
the obtaining at least one makeup template includes:
and acquiring the corresponding preset template from at least one preset makeup database.
In one possible implementation, the at least one preset template includes one or more of an eye shadow template, a blush template, a brow template, an aesthetic pupil template, or a stereo enhancement template;
the at least one makeup database includes one or more of an eye shadow database, a blush database, a brow database, a cosmetic pupil database, or a stereo augmentation database.
In a possible implementation manner, the obtaining of makeup effect information corresponding to each makeup template includes:
identifying the gender of the user according to the acquired face attribute information;
in response to the selection operation input by the user, determining the original makeup effect information corresponding to each makeup template;
and calculating the gender of the user and the original makeup effect information corresponding to each makeup template through an AI algorithm to generate the makeup effect information corresponding to each makeup template.
In a possible implementation manner, after the acquiring the target face image of the user, the method further includes:
carrying out face recognition on the target face image to obtain the area of a face, and obtaining a screen occupation ratio according to the area of the face and the area of a screen display area, wherein the screen occupation ratio is the ratio of the area of the face to the area of the screen display area;
detecting the target face image by adopting a high-precision face alignment technology to obtain a face posture angle;
judging whether the screen occupation ratio is larger than a first threshold and smaller than a second threshold and whether the face pose angle is smaller than a third threshold, wherein the first threshold is smaller than the second threshold;
and if the screen occupation ratio is judged to be larger than a first threshold value and smaller than a second threshold value, and the face posture angle is smaller than a third threshold value, continuing to execute the steps of obtaining at least one makeup template and obtaining makeup effect information corresponding to each makeup template.
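The screen-ratio and pose-angle gate described in this implementation reduces to a simple predicate; the default thresholds below are the example values given later in the description (0.05, 1 and 60°):

```python
def meets_makeup_conditions(face_area, screen_area, pose_angle_deg,
                            t1=0.05, t2=1.0, t3=60.0):
    """Gate from the claims: the screen occupation ratio (face area over
    the display area) must lie strictly between the first and second
    thresholds, and the face pose angle must be below the third threshold.
    Only when both hold does processing continue to the template step."""
    ratio = face_area / screen_area
    return t1 < ratio < t2 and pose_angle_deg < t3
```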
In a second aspect, an embodiment of the present invention provides a makeup processing apparatus, including:
the acquisition module is used for acquiring a target face image of a user;
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring at least one makeup template and acquiring makeup effect information corresponding to each makeup template;
the determining module is used for determining a makeup processing area corresponding to each makeup template on the target face image;
and the fusion module is used for fusing each makeup template, the corresponding makeup effect information and the corresponding makeup processing area by adopting a fusion strategy corresponding to each makeup template to generate a virtual face image.
In one possible implementation, the at least one makeup template includes at least one preset template;
the device further comprises:
and the aligning module is used for aligning the at least one preset template with the corresponding makeup processing area.
In a possible implementation manner, the makeup template includes a face skin color template, and the obtaining module is specifically configured to perform face skin color segmentation on the target face image to generate the face skin color template.
In one possible implementation manner, the method further includes:
the calculation module is used for carrying out face recognition on the target face image to obtain the area of a face region, and obtaining a screen occupation ratio according to the area of the face region and the area of a screen display region, wherein the screen occupation ratio is the ratio of the area of the face region to the area of the screen display region;
the detection module is used for detecting the target face image by adopting a high-precision face alignment technology to obtain a face posture angle;
the judging module is used for judging whether the screen occupation ratio is larger than a first threshold and smaller than a second threshold and whether the face posture angle is smaller than a third threshold, wherein the first threshold is smaller than the second threshold, and if the screen occupation ratio is larger than the first threshold and smaller than the second threshold and the face posture angle is smaller than the third threshold, the acquiring module is triggered to continue to execute the steps of acquiring at least one makeup effect template and acquiring makeup effect information corresponding to each makeup effect template.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the electronic apparatus to perform the method of cosmetic treatment of the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium includes a stored program, where the program, when executed, controls an apparatus where the computer-readable storage medium is located to execute the cosmetic processing method in the first aspect or any possible implementation manner of the first aspect.
According to the technical scheme provided by the embodiment of the invention, at least one makeup template and makeup effect information corresponding to each makeup template are obtained, a makeup processing area corresponding to each makeup template is determined on the acquired target face image of the user, and a fusion strategy corresponding to each makeup template is adopted to perform fusion processing on each makeup template, the corresponding makeup effect information and the corresponding makeup processing area to generate the virtual face image.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a flowchart of a makeup processing method according to an embodiment of the present invention;
FIG. 2 is a diagram of a screen display area according to an embodiment of the present invention;
FIG. 3 is a flow chart of presetting a makeup database according to an embodiment of the present invention;
fig. 4 is a schematic diagram of selecting a makeup template and selecting makeup effect information according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for generating a lip template according to an embodiment of the present invention;
fig. 6 is a flowchart of obtaining makeup effect information according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating alignment of a preset template with a corresponding makeup processing area according to an embodiment of the present invention;
FIG. 8 is a flow chart of a fusion process provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a makeup processing device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an alignment module according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an obtaining module according to an embodiment of the present invention;
fig. 12 is another schematic structural diagram of an obtaining module according to an embodiment of the present invention;
fig. 13 is a schematic diagram of an electronic device according to an embodiment of the present invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
Fig. 1 is a flowchart of a makeup processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
and 102, collecting a target face image of the user.
The steps in the embodiments of the present invention may be performed by an electronic device, for example, the electronic device may include a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like.
In the embodiment of the invention, for example, a user holds electronic equipment, the user is shot through a front camera of the electronic equipment to acquire a target face image of the user, and then the electronic equipment performs makeup processing on the acquired target face image to generate a virtual face image.
In the embodiment of the invention, the target face image can be a static picture or a picture in a dynamic video.
Step 104, acquiring at least one makeup template and acquiring makeup effect information corresponding to each makeup template.
In the embodiment of the invention, the makeup template is a makeup sample of a human face. As an alternative, the at least one makeup template includes one or more of a lip template, a human face skin color template, or at least one preset template, wherein the at least one preset template includes one or more of an eye shadow template, a blush template, a brow-tracing template, a cosmetic pupil template, or a stereo enhancement template. In the embodiment of the invention, the number of the makeup templates can be one or more. When the number of the makeup templates is one, the makeup template may be a lip template, a face skin color template, an eye shadow template, a blush template, a eyebrow tracing template, a pupil shaping template, or a stereo enhancement template. When the number of makeup templates is plural, the plural makeup templates may include at least two of a lip template, a face skin color template, an eye shadow template, a blush template, a brow template, a pupil template, and a stereoscopic enhancing template.
The makeup effect information of the makeup template may be preset. In the embodiment of the invention:
- The makeup effect information corresponding to the face skin color template includes intensity and color. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means stronger skin beautification; the color includes color numbers such as 1, 2, 3, 4 and 5, where a smaller color value means a whiter face skin color.
- The makeup effect information corresponding to the lip template includes intensity, color and texture. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a heavier lip makeup; the color includes color numbers such as 001, 002, 003, 004, 005 and 006; the texture includes matte, pearlescent, velvet, satin, cream and other finishes.
- The makeup effect information corresponding to the eye shadow template includes intensity, color and texture. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a heavier eye shadow makeup; the color includes color numbers such as 1 through 6; the texture includes matte, shimmer and other finishes.
- The makeup effect information corresponding to the blush template includes intensity and color. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a heavier blush makeup; the color includes color numbers such as 1 through 6.
- The makeup effect information corresponding to the eyebrow tracing template includes intensity and color. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a heavier eyebrow makeup; the color includes color numbers such as 1 through 6.
- The makeup effect information corresponding to the cosmetic pupil template includes intensity and color. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a stronger cosmetic pupil effect; the color includes color numbers such as 1 through 6.
- The makeup effect information corresponding to the stereo enhancement template includes intensity and color. For example, the intensity includes levels such as 2, 4, 6, 8 and 10, where a larger intensity value means a more three-dimensional face image; the color includes color numbers such as 1 through 6.
Step 106, determining a makeup processing area corresponding to each makeup template on the target face image.
In the embodiment of the present invention, the makeup processing area refers to the area range in which makeup processing is performed. The correspondence is as follows: for the eye shadow template, the makeup processing area is the eye shadow area; for the blush template, the blush area; for the eyebrow tracing template, the eyebrow area; for the cosmetic pupil template, the pupil area; for the face skin color template, the face skin color area; for the stereo enhancement template, the stereo enhancement area; and for the lip template, the lip area.
Step 108, adopting a fusion strategy corresponding to each makeup template, and fusing each makeup template, the corresponding makeup effect information and the corresponding makeup processing area to generate a virtual face image.
In the embodiment of the present invention, the fusion policy includes one or more of a policy of fusing in a Red Green Blue (RGB) color space domain, a policy of fusing in a brightness and chrominance (YUV) color space domain, or a policy of fusing in a Hue Saturation Value (HSV) color space domain. Where YUV is a color coding method, "Y" denotes brightness, i.e., a gray scale value, and "U" and "V" denote chroma.
In the embodiment of the invention, the fusion strategy corresponding to each makeup template is as follows: for the eye shadow template, the blush template, the eyebrow tracing template, the cosmetic pupil template and the face skin color template, fusion is performed in the RGB color space domain; for the stereo enhancement template, fusion is performed on the Y channel of the YUV color space domain; and for the lip template, fusion is performed in the RGB, YUV and HSV color space domains.
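As an illustration of the Y-channel-only strategy used for the stereo enhancement template, the sketch below converts to YUV, blends only the luma plane, and converts back; the alpha-blending operator is an assumption, since the patent does not fix the fusion operator:

```python
import numpy as np

def fuse_on_y_channel(img_rgb, template_y, alpha):
    """Fuse a stereo-enhancement template on the Y (luma) channel only,
    leaving chroma untouched. `template_y` is the aligned template's luma
    plane; `alpha` is an assumed blend weight in [0, 1]."""
    img = np.asarray(img_rgb, dtype=np.float64)
    # BT.601 RGB -> YUV.
    y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    u = -0.1687 * img[..., 0] - 0.3313 * img[..., 1] + 0.5 * img[..., 2]
    v = 0.5 * img[..., 0] - 0.4187 * img[..., 1] - 0.0813 * img[..., 2]
    y = (1 - alpha) * y + alpha * template_y          # blend luma only
    # Inverse YUV -> RGB.
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)
```

Because U and V pass through unchanged, the colors of the face are preserved while brightness contrast (the three-dimensional impression) is enhanced.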
As an alternative, step 102 is followed by:
Step 1032, carrying out face recognition on the target face image to obtain the area of the face region, and obtaining a screen occupation ratio according to the area of the face region and the area of the screen display area.
Fig. 2 is a schematic diagram of a screen display area according to an embodiment of the present invention. As shown in fig. 2, the region C1 where the face is located lies in the middle of the screen display area C2. The screen occupation ratio is obtained by dividing the area of region C1 by the area of the screen display region C2.
Step 1034, detecting the target face image by adopting a high-precision face alignment technology to obtain a face pose angle.
For example, the first threshold value is 0.05, the second threshold value is 1, and the third threshold value is 60 °.
In the embodiment of the invention, if the screen occupation ratio is judged to be larger than the first threshold and smaller than the second threshold, the distance between the face and the screen of the electronic device is moderate; if the face pose angle is smaller than the third threshold, the electronic device can capture all parts of the face. If both conditions hold, the makeup processing condition is satisfied, and execution continues with step 104. In the embodiment of the invention, the third threshold is set to a relatively large value, which enlarges the range of acceptable face pose angles, so that a good makeup effect can be obtained even at a large pose angle.
In the embodiment of the invention, if it is determined that the screen occupation ratio is smaller than the first threshold, it indicates that the distance between the face and the screen of the electronic device is too far, the face image shot by the electronic device is too small, and the part of the face cannot be clearly identified, and at this time, the makeup processing condition is not met, the makeup processing is not performed on the target face image, and the makeup processing is finished.
In the embodiment of the invention, if the screen occupation ratio is judged to be larger than or equal to the second threshold, the fact that the distance between the face and the screen of the electronic equipment is too short is indicated, the face image shot by the electronic equipment is too large, part of the face overflows the screen, and the condition for cosmetic treatment is not met, the cosmetic treatment is not performed on the target face image, and the cosmetic treatment is finished.
In the embodiment of the invention, if the face posture angle is judged to be larger than or equal to the third threshold, the electronic equipment cannot shoot all parts of the face, and the makeup processing condition is not met, the makeup processing is not performed on the target face image, and the makeup processing is finished.
In the embodiment of the present invention, before step 102, the method further includes: at least one makeup database is preset. As an alternative, the at least one makeup database includes one or more of an eye shadow database, a blush database, a brow database, a cosmetic pupil database, or a stereo augmentation database.
In the embodiment of the invention, if the makeup template is the eye shadow template, the makeup database corresponding to the makeup template is the eye shadow database; if the cosmetic template is the blush template, the cosmetic database corresponding to the cosmetic template is the blush database; if the makeup template is the eyebrow drawing template, the makeup database corresponding to the makeup template is the eyebrow drawing database; if the cosmetic template is the cosmetic pupil template, the cosmetic database corresponding to the cosmetic template is the cosmetic pupil database; and if the makeup template is the three-dimensional enhanced template, the makeup database corresponding to the makeup template is the three-dimensional enhanced database.
Fig. 3 is a flowchart of presetting a makeup database in the embodiment of the present invention, and as shown in fig. 3, the concrete steps of presetting the makeup database are as follows:
And step 100a, a face image with a correct posture is selected as the standard face image.
And step 100b, drawing templates of different categories on the standard face image using image processing software, wherein each category of template may include at least one standard template.
In the embodiment of the invention, the templates of different categories may include an eye shadow template, a blush template, an eyebrow tracing template, a cosmetic pupil template and a three-dimensional enhancement template. For the eye shadow category, the included standard template is a standard eye shadow template; for the blush category, a standard blush template; for the eyebrow tracing category, a standard eyebrow tracing template; for the cosmetic pupil category, a standard cosmetic pupil template; and for the three-dimensional enhancement category, a standard three-dimensional enhancement template.
In the embodiment of the invention, the image processing software can be Adobe Photoshop. In practical applications, the image processing software may also be other types of software, for example, drawing software provided by the Windows system, which is not listed here.
And step 100c, storing each category of standard template into the makeup database corresponding to that category.
In the embodiment of the invention, when the standard template is a standard eye shadow template, the corresponding makeup database is the eye shadow database, and the standard eye shadow template may be stored in the eye shadow database; when the standard template is a standard blush template, the corresponding database is the blush database; when the standard template is a standard eyebrow tracing template, the corresponding database is the eyebrow tracing database; when the standard template is a standard cosmetic pupil template, the corresponding database is the cosmetic pupil database; and when the standard template is a standard three-dimensional enhancement template, the corresponding database is the three-dimensional enhancement database. In each case the standard template may be stored in its corresponding database.
In the embodiment of the invention, the makeup databases of different types are independently constructed, so that abundant and freely combinable makeup templates can be provided for users.
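Steps 100a to 100c amount to routing each standard template into its per-category store. A minimal in-memory stand-in (all key names are illustrative, not from the patent) could look like:

```python
# illustrative stand-in for the five independently constructed makeup databases
MAKEUP_DATABASES = {
    "eye_shadow": [],
    "blush": [],
    "eyebrow": [],
    "pupil": [],
    "stereo": [],  # three-dimensional enhancement
}

def store_standard_template(category, template):
    """Step 100c: store a standard template into the database matching
    its category; returns the new size of that database."""
    MAKEUP_DATABASES[category].append(template)
    return len(MAKEUP_DATABASES[category])
```

Because the databases are separate, templates from different categories can later be combined freely when composing a full makeup look.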
In the embodiment of the present invention, the obtaining of the at least one makeup template in step 104 specifically includes: at least one makeup template is acquired in response to a setting operation input by a user.
Fig. 4 is a schematic diagram of selecting a makeup template and selecting makeup effect information according to an embodiment of the present invention, and as shown in fig. 4, the interactive interface includes a face skin color makeup operation button C3, a color selection list, and an intensity selection list.
As an alternative, when the makeup template includes a face skin color template, obtaining at least one makeup template in step 104 includes: performing face skin color segmentation on the target face image to generate the face skin color template. As an alternative, the setting operation input by the user is an operation of enabling face skin color makeup; specifically, the user clicks the face skin color makeup button C3 to enable the face skin color makeup operation, and then in step 104, in response to this operation, face skin color segmentation is performed on the target face image to generate the face skin color template. After the operation is enabled, a color selection list and an intensity selection list appear in the face skin color interactive interface. The intensity selection list provides 5 skin color intensity options, which are 2, 4, 6, 8 and 10 from top to bottom; the larger the value, the higher the corresponding skin color beautification intensity. The color selection list provides 5 skin color options, which are 1, 2, 3, 4 and 5 from top to bottom; the smaller the value, the whiter the corresponding face skin color. The user can select a preferred skin color and intensity in the interactive interface.
In the embodiment of the invention, the face skin color template prevents the makeup effect from overflowing the face and avoids modifying the image background outside the face: virtual makeup processing is applied only to the face skin color area, achieving a better makeup effect.
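The patent does not disclose its segmentation method; a common baseline is thresholding the Cr/Cb chroma channels after an RGB-to-YCrCb conversion (BT.601 coefficients). The ranges below are the widely used skin-tone bounds and are an assumption, not the patent's algorithm:

```python
import numpy as np

def skin_mask(rgb):
    """Rough face skin color segmentation sketch: convert RGB to YCrCb
    and keep pixels whose Cr/Cb fall in typical skin-tone ranges."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 chroma components, offset to the 0..255 range
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
```

The boolean mask is then used as the face skin color template, restricting every later fusion step to skin pixels.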
Fig. 5 is a flowchart of a method for generating a lip template according to an embodiment of the present invention, and as shown in fig. 5, when the makeup template includes a lip template, the obtaining at least one makeup template in step 104 may specifically include:
and 1042a, performing triangularization calculation processing on lip key points in the face key points acquired from the target face image to generate a first triangularization array.
And 1042b, obtaining a triangular area of the target face image according to the first triangularization array.
And 1042c, filling the triangular area to obtain a lip binary template image.
1042d, performing edge gradient processing on the lip binarization template image by adopting a filtering technology to generate a lip template.
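Steps 1042a to 1042d can be sketched as follows: rasterize the lip triangles into a binary template, then soften the edges. The barycentric point-in-triangle test and the box filter stand in for the unspecified triangularization and filtering techniques:

```python
import numpy as np

def triangle_mask(h, w, tri):
    """Binary mask of one triangle via a barycentric inside test.
    tri is three (x, y) vertices."""
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1), (x2, y2) = tri
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    a = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / d
    b = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / d
    c = 1.0 - a - b
    return ((a >= 0) & (b >= 0) & (c >= 0)).astype(np.float32)

def lip_template(h, w, triangles, k=3):
    """Steps 1042b-1042d: fill the triangular areas into a binary
    template image, then apply a k x k box blur as a simple stand-in
    for the edge gradient (filtering) processing."""
    mask = np.zeros((h, w), np.float32)
    for tri in triangles:
        mask = np.maximum(mask, triangle_mask(h, w, tri))
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```

In practice the triangles would come from Delaunay triangulation of the lip key points (step 1042a); here they are passed in directly.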
As an alternative, the setting operation input by the user is an operation of enabling lip makeup; specifically, the user clicks the lip makeup button to enable lip makeup, and then in step 104, at least one makeup template is obtained in response to this operation. After lip makeup is enabled, a color selection list, an intensity selection list and a texture selection list appear in the lip interactive interface. The color selection list provides 6 lip color options, which are 001, 002, 003, 004, 005 and 006 from top to bottom. The intensity selection list provides 5 lip intensity options, which are 2, 4, 6, 8 and 10 from top to bottom; the larger the value, the heavier the corresponding lip makeup. The texture selection list provides 5 lip texture options, which are matte, pearlescent, velvet, satin and cream from top to bottom. The user can select a preferred lip color, intensity and texture in the interactive interface.
As another alternative, the at least one cosmetic template includes at least one preset template. The step 104 of obtaining at least one makeup template may specifically include: and acquiring a corresponding preset template from at least one preset makeup database.
In the embodiment of the invention, if the preset template is the eye shadow template and the makeup database is the eye shadow database, a standard eye shadow template is selected from the standard eye shadow templates in the eye shadow database and determined as the eye shadow template. The blush, eyebrow tracing, cosmetic pupil and three-dimensional enhancement templates are obtained in the same manner: a standard template of the corresponding category is selected from the corresponding makeup database (the blush database, the eyebrow tracing database, the cosmetic pupil database or the three-dimensional enhancement database, respectively) and determined as the preset template of that category.
In an embodiment of the present invention, fig. 6 is a flowchart illustrating obtaining makeup effect information in an embodiment of the present invention, and as shown in fig. 6, obtaining makeup effect information corresponding to each makeup template in step 104 may specifically include:
and step 1044a, identifying the gender of the user according to the acquired face attribute information.
In the embodiment of the invention, the face attribute information includes skin color, bone structure, age, whether a mustache exists, and the like.
And step 1044b, determining the original makeup effect information corresponding to each makeup template in response to the selection operation input by the user.
For example, if the makeup template is a face skin color template, the selection operation input by the user is to select intensity and color; as shown in fig. 4, the user selects a color from the color selection list and an intensity from the intensity selection list, and the determined original makeup effect information includes that color and intensity. If the makeup template is a lip template or an eye shadow template, the selection operation is to select intensity, color and texture, and the determined original makeup effect information includes the selected intensity, color and texture. If the makeup template is a blush template, an eyebrow tracing template, a cosmetic pupil template or a three-dimensional enhancement template, the selection operation is to select intensity and color, and the determined original makeup effect information includes the selected intensity and color.
And step 1044c, calculating the gender of the user and the original makeup effect information corresponding to each makeup template through an AI algorithm, to generate the makeup effect information corresponding to each makeup template. In the embodiment of the present invention, the AI algorithm may be a Convolutional Neural Network (CNN).
In the embodiment of the invention, the gender of the user can be identified from the detected face attribute information, the original makeup effect information is adjusted according to the gender to obtain the makeup effect information, and the virtual makeup processing is completed with the obtained makeup effect information, so that the face is adaptively made up according to the face attribute information, further improving the virtual face makeup effect.
In the embodiment of the invention, if the original makeup effect information is not calculated through the AI algorithm, the original makeup effect information can be used as the makeup effect information.
As an alternative, the at least one makeup template includes at least one preset template, and before step 108 the method further includes: step 107, aligning the at least one preset template with the corresponding makeup processing area.
Step 107 may specifically include:
and 107c, deforming the preset templates in the color space domain corresponding to each preset template according to the second triangularization array so as to align with the makeup processing areas corresponding to the preset templates.
In the embodiment of the invention, when the preset template is the eye shadow template, the color space domain corresponding to the preset template is the RGB color space domain; when the preset template is the eyebrow tracing template, the three-dimensional enhancement template or the blush template, the corresponding color space domain is the Y channel of the YUV color space domain. As a specific implementation, there are a plurality of preset templates, which may include an eye shadow template, a blush template, an eyebrow tracing template and a three-dimensional enhancement template.
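The template-to-color-space pairing above can be captured by a small lookup table; the `pupil` entry follows the cosmetic pupil handling described later in this embodiment, and all key names are illustrative assumptions:

```python
# which color space domain each preset template is deformed in (step 107c)
TEMPLATE_DOMAIN = {
    "eye_shadow": "RGB",
    "eyebrow":    "YUV-Y",   # Y channel of the YUV color space domain
    "stereo":     "YUV-Y",   # three-dimensional enhancement template
    "blush":      "YUV-Y",
    "pupil":      "RGB",     # cosmetic pupil template
}

def domain_for(template_type):
    """Look up the color space domain used to warp a preset template."""
    return TEMPLATE_DOMAIN[template_type]
```

Keeping this mapping in one place makes steps 107c to 107f a single dispatch rather than four separate branches.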
Fig. 7 is a flowchart illustrating alignment of a preset template with a corresponding cosmetic treatment area according to an embodiment of the present invention, and as shown in fig. 7, step 107 may specifically include:
and 107a, detecting the standard face image by adopting a face key point alignment technology to obtain standard key point information.
And step 107b, performing triangularization calculation processing on the standard key point information to generate a second triangularization array.
And 107c, deforming the eye shadow template in the RGB color space domain according to the second triangularization array so as to align with the makeup processing area corresponding to the eye shadow template.
And 107d, deforming the eyebrow drawing template in the Y channel of the YUV color space domain according to the second triangularization array so as to align with the makeup processing area corresponding to the eyebrow drawing template.
And 107e, deforming the three-dimensional enhancement template in the Y channel of the YUV color space domain according to the second triangulated array so as to align with the makeup processing area corresponding to the three-dimensional enhancement template.
And 107f, deforming the blush template in a Y channel of a YUV color space domain according to the second triangularization array so as to align with a makeup processing area corresponding to the blush template.
As an alternative, the at least one preset template comprises a cosmetic pupil template. Step 106 may specifically include: determining the pupil center and the pupil size of the human eye according to the face key points acquired from the target face image, so as to determine the makeup processing area corresponding to the cosmetic pupil template, where this area includes a circular area taking the pupil center as the center of the circle and the pupil size as the diameter. Step 107 may specifically include: deforming the cosmetic pupil template in the RGB color space domain so as to align it with the makeup processing area corresponding to the cosmetic pupil template.
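The circular makeup processing area for the cosmetic pupil template can be generated directly from the pupil center and pupil size; the function below is a minimal sketch of that geometry:

```python
import numpy as np

def pupil_region_mask(h, w, center, diameter):
    """Circular makeup processing area: centered on the pupil center
    (row, col), with the pupil size as the diameter."""
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    return ((ys - cy) ** 2 + (xs - cx) ** 2) <= (diameter / 2.0) ** 2
```

The resulting boolean mask limits the deformed cosmetic pupil template to the iris area during fusion.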
In a specific implementation, in step 108, the at least one makeup template includes an eye shadow template, a blush template, an eyebrow tracing template, a cosmetic pupil template, a three-dimensional enhancement template, a lip template, and a face skin color template.
Fig. 8 is a flowchart of the fusion processing in the embodiment of the present invention, and as shown in fig. 8, the performing the fusion processing on each makeup template, the corresponding makeup effect information, and the corresponding makeup processing area by using the fusion policy corresponding to each makeup template in step 108 may specifically include:
and 108a, carrying out fusion processing on the eye shadow template, the makeup effect information corresponding to the eye shadow template and the eye shadow area corresponding to the eye shadow template in an RGB color space domain.
And 108b, carrying out fusion processing on the blush template, the makeup effect information corresponding to the blush template and the blush area corresponding to the blush template in an RGB color space domain.
And 108c, fusing the eyebrow drawing template, the makeup effect information corresponding to the eyebrow drawing template and the eyebrow drawing area corresponding to the eyebrow drawing template in the RGB color space domain.
And 108d, performing fusion processing on the cosmetic pupil template, the makeup effect information corresponding to the cosmetic pupil template and the cosmetic pupil area corresponding to the cosmetic pupil template in the RGB color space domain.
And 108e, performing fusion processing on the three-dimensional enhancement template, the makeup effect information corresponding to the three-dimensional enhancement template and the three-dimensional enhancement area corresponding to the three-dimensional enhancement template on a Y channel of a YUV color space domain.
And 108f, fusing the face skin color template, the makeup effect information corresponding to the face skin color template and the face skin color area corresponding to the face skin color template in an RGB color space domain.
And 108g, performing fusion processing on the lip template, the makeup effect information corresponding to the lip template and the lip area corresponding to the lip template in an RGB color space domain, a YUV color space domain and an HSV color space domain.
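The patent does not disclose the exact fusion formula for steps 108a to 108g; linear alpha blending is a common stand-in and illustrates how a template, its effect intensity, and its processing area combine in a given color space domain:

```python
import numpy as np

def fuse(region_pixels, template_pixels, intensity):
    """Minimal fusion sketch: blend a makeup template into its makeup
    processing area with a user-selected intensity in [0, 1]. The real
    fusion strategy may differ per template and color space domain."""
    a = float(intensity)
    return (1.0 - a) * region_pixels + a * template_pixels
```

For templates fused on the Y channel only (e.g. the three-dimensional enhancement template), the same blend would be applied to the luma plane while U and V are left unchanged.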
It should be noted that: the execution sequence of each step in each flowchart in the embodiment of the present invention is only an example, and in an actual application, the execution sequence may be changed as needed. For example, steps 1044a and 1044 may change the execution order in fig. 6; for example, in fig. 7, the execution order of steps 107c to 107f may be arbitrarily changed; for example, in fig. 8, the execution order of steps 108a to 108g may be changed arbitrarily.
According to the technical scheme of the makeup processing method, at least one makeup template and makeup effect information corresponding to each makeup template are obtained, a makeup processing area corresponding to each makeup template is determined on a target face image of an acquired user, a fusion strategy corresponding to each makeup template is adopted, and each makeup template, the corresponding makeup effect information and the corresponding makeup processing area are subjected to fusion processing to generate a virtual face image.
In the embodiment of the invention, the face recognition technology and the high-precision face alignment technology are adopted to assist in virtual makeup of the face, so that the effect of virtual face makeup is further improved.
The makeup processing method in the embodiment of the invention can obtain a better virtual makeup effect under the condition of low algorithm complexity, and can be applied to electronic equipment with limited performance.
Fig. 9 is a schematic structural view of a makeup processing device according to an embodiment of the present invention, as shown in fig. 9, the device includes: the system comprises an acquisition module 11, an acquisition module 12, a determination module 13 and a fusion module 14.
The acquisition module 11 is used for acquiring a target face image of a user. The obtaining module 12 is configured to obtain at least one makeup template and obtain makeup effect information corresponding to each makeup template. The determining module 13 is configured to determine a makeup processing area corresponding to each makeup template on the target face image. The fusion module 14 is configured to perform fusion processing on each makeup template, the corresponding makeup effect information, and the corresponding makeup processing area by using a fusion policy corresponding to each makeup template, so as to generate a virtual face image.
In an embodiment of the invention, the at least one makeup template comprises at least one preset template. The device further includes: an alignment module 15. The alignment module 15 is used to align the at least one preset template with the corresponding makeup processing area.
Fig. 10 is a schematic structural diagram of an alignment module according to an embodiment of the present invention, and as shown in fig. 10, the alignment module 15 includes: a detection submodule 151, a first calculation submodule 152 and a deformation submodule 153. The detection sub-module 151 is configured to detect a standard face image by using a face key point alignment technique to obtain standard key point information. The first computation submodule 152 is configured to perform triangularization computation on the standard keypoint information to generate a second triangulated array. The deforming submodule 153 is configured to deform the preset templates in the color space domain corresponding to each preset template according to the second triangulated array, so as to align with the makeup processing areas corresponding to the preset templates.
In the embodiment of the present invention, the determining module 13 is specifically configured to determine the pupil center and the pupil size of the human eye according to the face key points obtained from the target face image, so as to determine the makeup processing area corresponding to the cosmetic pupil template, where this area includes a circular area taking the pupil center as the center of the circle and the pupil size as the diameter. The alignment module 15 is specifically configured to deform the cosmetic pupil template in the RGB color space domain so as to align it with the makeup processing area corresponding to the cosmetic pupil template.
In the embodiment of the invention, the makeup template comprises a face skin color template, and the acquisition module 12 is specifically used for carrying out face skin color segmentation on the target face image to generate the face skin color template.
Fig. 11 is a schematic structural diagram of the obtaining module according to an embodiment of the present invention, in which the makeup template includes a lip template, and the obtaining module 12 includes: a second computation submodule 121, a third computation submodule 122, a filling submodule 123 and a filtering submodule 124.
The second calculation submodule 121 is configured to perform triangularization calculation processing on lip key points in face key points acquired from the target face image, and generate a first triangularization array. The third computation submodule 122 is configured to obtain a triangular region of the target face image according to the first triangularization array. The filling submodule 123 is configured to perform filling processing on the triangular region to obtain a lip binarization template image. The filtering submodule 124 is configured to perform edge gradient processing on the lip binarization template image by using a filtering technique, so as to generate a lip template.
In an embodiment of the invention, the at least one makeup template comprises at least one preset template. The obtaining module 12 is specifically configured to obtain a corresponding preset template from at least one preset makeup database.
Fig. 12 is another schematic structural diagram of the obtaining module in the embodiment of the present invention, where the obtaining module 12 includes: a recognition submodule 125, a determination submodule 126 and a generation submodule 127. The recognition submodule 125 is configured to identify the gender of the user according to the acquired face attribute information. The determination submodule 126 is configured to determine, in response to a selection operation input by the user, the original makeup effect information corresponding to each makeup template. The generation submodule 127 is configured to calculate, through an AI algorithm, the gender of the user and the original makeup effect information corresponding to each makeup template, and generate the makeup effect information corresponding to each makeup template.
As shown in fig. 9, in the embodiment of the present invention, the apparatus further includes: a calculation module 16, a detection module 17 and a judgment module 18. The calculation module 16 is configured to perform face recognition on the target face image to obtain an area of a face, and obtain a screen occupation ratio according to the area of the face and an area of the screen display area, where the screen occupation ratio is a ratio of the area of the face to the area of the screen display area. The detection module 17 is configured to detect a target face image by using a high-precision face alignment technique to obtain a face pose angle. The judging module 18 is configured to judge whether the screen occupation ratio is greater than a first threshold and smaller than a second threshold, and whether the face pose angle is smaller than a third threshold, where the first threshold is smaller than the second threshold, and if it is judged that the screen occupation ratio is greater than the first threshold and smaller than the second threshold, and the face pose angle is smaller than the third threshold, the obtaining module 12 is triggered to continue to perform the steps of obtaining at least one makeup template and obtaining makeup effect information corresponding to each makeup template.
According to the technical scheme provided by the embodiment of the invention, at least one makeup template and makeup effect information corresponding to each makeup template are obtained, a makeup processing area corresponding to each makeup template is determined on the acquired target face image of the user, and a fusion strategy corresponding to each makeup template is adopted to perform fusion processing on each makeup template, the corresponding makeup effect information and the corresponding makeup processing area to generate the virtual face image.
The embodiment of the invention provides a computer-readable storage medium, which comprises a stored program, wherein when the program runs, the electronic equipment where the computer-readable storage medium is located is controlled to execute the embodiment of the cosmetic treatment method.
An embodiment of the present invention provides an electronic device, including: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform embodiments of the cosmetic treatment method described above.
Fig. 13 is a schematic diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 13, the electronic apparatus 2 of the embodiment includes: the processor 21, the memory 22, and the computer program 23 stored in the memory 22 and capable of running on the processor 21, wherein the computer program 23 when executed by the processor 21 implements the cosmetic processing method in the embodiment, and therefore, in order to avoid repetition, details are not repeated herein.
The electronic device 2 includes, but is not limited to, a processor 21 and a memory 22. Those skilled in the art will appreciate that fig. 13 is merely an example of the electronic device 2 and does not constitute a limitation of the electronic device 2, which may include more or fewer components than shown, combine certain components, or use different components; for example, a network device may also include an input-output device, a network access device, a bus, etc.
The Processor 21 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 22 may be an internal storage unit of the electronic device 2, such as a hard disk or a memory of the electronic device 2. The memory 22 may also be an external storage device of the electronic device 2, such as a plug-in hard disk provided on the electronic device 2, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 22 may also include both an internal storage unit and an external storage device of the electronic device 2. The memory 22 is used to store computer programs and other programs and data required by the network device. The memory 22 may also be used to temporarily store data that has been output or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. Such a software functional unit includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned computer-readable storage media include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (19)
1. A cosmetic treatment method, comprising:
collecting a target face image of a user;
acquiring at least one makeup template and acquiring makeup effect information corresponding to each makeup template;
determining a makeup processing area corresponding to each makeup template on the target face image;
and fusing each makeup template with the corresponding makeup effect information and the corresponding makeup processing area by adopting a fusion strategy corresponding to each makeup template, so as to generate a virtual face image.
2. The method of claim 1, wherein the fusion strategy comprises one or more of a strategy for performing fusion in the red-green-blue (RGB) color space domain, a strategy for performing fusion in the YUV color space domain, or a strategy for performing fusion in the hue-saturation-value (HSV) color space domain.
3. The method of claim 2, wherein the at least one makeup template comprises one or more of a lip template, a human face skin color template, or at least one preset template, the at least one preset template comprising one or more of an eye shadow template, a blush template, an eyebrow template, a cosmetic pupil template, or a stereo enhancement template;
if the makeup template is the eye shadow template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the blush template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the eyebrow template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the cosmetic pupil template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the face skin color template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain;
if the makeup template is the stereo enhancement template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion on the Y channel of the YUV color space domain;
and if the makeup template is the lip template, the fusion strategy corresponding to the makeup template is a strategy for performing fusion in the RGB color space domain, the YUV color space domain, and the HSV color space domain.
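The per-template strategies of claim 3 can be illustrated with a short numpy sketch (the alpha-blend operator and the BT.601 conversion matrix are assumptions; the claim does not disclose the actual fusion formula): fusing in the RGB color space domain blends all three channels, whereas fusing on the Y channel of the YUV color space domain alters only luminance and preserves chroma, which is why it suits the stereo enhancement (contouring) template.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (an assumed, standard choice;
# the patent does not specify which conversion is used).
_RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                     [-0.14713, -0.28886,  0.436  ],
                     [ 0.615,   -0.51499, -0.10001]])

def blend_rgb(base, overlay, alpha):
    """Fuse in the RGB color space domain: alpha-blend every channel."""
    return ((1.0 - alpha) * base + alpha * overlay).astype(np.uint8)

def blend_y_channel(base_rgb, overlay_y, alpha):
    """Fuse on the Y channel of the YUV color space domain only:
    luminance changes, chroma (U, V) is left untouched."""
    yuv = base_rgb.astype(np.float64) @ _RGB2YUV.T
    yuv[..., 0] = (1.0 - alpha) * yuv[..., 0] + alpha * overlay_y
    rgb = yuv @ np.linalg.inv(_RGB2YUV).T
    return np.clip(rgb, 0, 255).astype(np.uint8)
```

With alpha = 0 both functions return the base image unchanged; with alpha = 1, blend_rgb returns the overlay, while blend_y_channel keeps the base image's colors and replaces only its brightness.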
4. The method of claim 1, wherein the at least one cosmetic template comprises at least one preset template;
before performing fusion processing on each makeup template, the corresponding makeup effect information and the corresponding makeup processing area by adopting the fusion strategy corresponding to each makeup template, the method further comprises the following steps:
and aligning the at least one preset template with the corresponding makeup processing area.
5. The method of claim 4, wherein said aligning said at least one preset template with a corresponding cosmetic treatment area comprises:
detecting a standard face image by adopting a face key point alignment technology to obtain standard key point information;
performing triangulation calculation on the standard key point information to generate a second triangulation array;
and deforming each preset template in the color space domain corresponding to that preset template according to the second triangulation array, so as to align it with the makeup processing area corresponding to that preset template.
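The deformation step of claim 5 is commonly realized as a piecewise-affine warp: each triangle of the triangulation array in the template is mapped onto the corresponding triangle on the face by its own affine transform. A minimal numpy sketch of the per-triangle solve (an illustrative assumption; the patent does not disclose its warping method):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve for the 2x3 affine matrix M such that M @ [x, y, 1]^T maps
    each source-triangle vertex onto the corresponding destination vertex
    -- the per-triangle deformation used to align a preset template."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3
    dst = np.asarray(dst_tri, float)                                # 3x2
    # Solve src @ M^T = dst for M^T, then transpose.
    return np.linalg.solve(src, dst).T

def warp_points(m, pts):
    """Apply the 2x3 affine matrix to an array of (x, y) points."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return pts @ m.T
```

In a full pipeline this solve would run once per triangle of the array, warping the pixels inside each template triangle into the matching face triangle.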
6. The method according to claim 5, wherein, when the preset template is the eye shadow template, the color space domain corresponding to the preset template is the RGB color space domain; or,
when the preset template is the eyebrow template, the color space domain corresponding to the preset template is the Y channel of the YUV color space domain; or,
when the preset template is the stereo enhancement template, the color space domain corresponding to the preset template is the Y channel of the YUV color space domain; or,
and when the preset template is the blush template, the color space domain corresponding to the preset template is the Y channel of the YUV color space domain.
7. The method of claim 4, wherein the at least one preset template comprises a cosmetic pupil template;
the determining of the makeup processing area corresponding to each makeup template on the target face image comprises the following steps:
determining the pupil center and the pupil size of human eyes according to the key points of the human faces acquired from the target human face image so as to determine a makeup processing area corresponding to the makeup template, wherein the makeup processing area corresponding to the makeup template comprises a circular area which takes the pupil center of the human eyes as the center of a circle and takes the pupil size as the diameter;
aligning the at least one preset template with the corresponding makeup treatment area, including:
and deforming the cosmetic pupil template in an RGB color space domain so as to align the cosmetic pupil template with a cosmetic treatment area corresponding to the cosmetic pupil template.
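The makeup processing area of claim 7 — a circle centered on the pupil center with the pupil size as its diameter — can be rasterized as a binary mask. A small numpy sketch (the function name and grid convention are illustrative):

```python
import numpy as np

def pupil_mask(shape, center, diameter):
    """Binary mask of the circular makeup processing area of claim 7:
    centered on the pupil center, with the pupil size as its diameter.
    `center` is (x, y) in pixel coordinates; `shape` is (height, width)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]  # row (y) and column (x) index grids
    cx, cy = center
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= (diameter / 2.0) ** 2
```

The cosmetic pupil template is then deformed (e.g. scaled and translated in the RGB domain) so that it covers exactly this circular region.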
8. The method of claim 1, wherein the makeup templates include a human face skin color template, and wherein the obtaining at least one makeup template includes:
and carrying out face skin color segmentation on the target face image to generate the face skin color template.
9. The method of claim 1, wherein the makeup templates include lip templates, and the obtaining at least one makeup template includes:
performing triangulation calculation on lip key points among the face key points acquired from the target face image to generate a first triangulation array;
obtaining a triangular area of the target face image according to the first triangulation array;
filling the triangular area to obtain a lip binarization template image;
and performing edge gradient processing on the lip binarization template image by adopting a filtering technology to generate the lip template.
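The steps of claim 9 in miniature: rasterize the lip triangles into a binarized template, then soften its edges with a filter. The sketch below uses a half-plane (cross-product sign) test for triangle filling and a separable box blur as the edge-gradient filter; the patent names only "a filtering technology", so the box blur is an assumed stand-in.

```python
import numpy as np

def fill_triangle(mask, tri):
    """Rasterize one (x, y) triangle of the triangulation array into a
    boolean lip mask using a half-plane sign test."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    def edge(a, b):
        return (xx - a[0]) * (b[1] - a[1]) - (yy - a[1]) * (b[0] - a[0])
    e0 = edge(tri[0], tri[1])
    e1 = edge(tri[1], tri[2])
    e2 = edge(tri[2], tri[0])
    # A point is inside when all three edge tests share a sign.
    inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | \
             ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
    mask |= inside
    return mask

def soften_edges(mask, k=5):
    """Turn the hard 0/1 template into one with gradual edges via a
    separable k x k box blur (stand-in for the edge-gradient filtering)."""
    pad = k // 2
    img = np.pad(mask.astype(np.float64), pad, mode="edge")
    kern = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kern, "valid"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kern, "valid"), 0, img)
    return img
```

Filling every triangle of the first triangulation array yields the lip binarization template image; the blurred result is the final lip template, with values fading from 1 inside the lips to 0 outside.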
10. The method of claim 1, wherein the at least one cosmetic template comprises at least one preset template;
the obtaining at least one makeup template includes:
and acquiring the corresponding preset template from at least one preset makeup database.
11. The method of claim 10, wherein the at least one preset template comprises one or more of an eye shadow template, a blush template, an eyebrow template, a cosmetic pupil template, or a stereo enhancement template;
the at least one makeup database comprises one or more of an eye shadow database, a blush database, an eyebrow database, a cosmetic pupil database, or a stereo enhancement database.
12. The method according to claim 1, wherein the obtaining of makeup effect information corresponding to each of the makeup templates comprises:
identifying the gender of the user according to the acquired face attribute information;
in response to the selection operation input by the user, determining the original makeup effect information corresponding to each makeup template;
and processing the gender of the user and the original makeup effect information corresponding to each makeup template through an AI algorithm, so as to generate the makeup effect information corresponding to each makeup template.
13. The method of claim 1, wherein after the acquiring the target face image of the user, further comprising:
carrying out face recognition on the target face image to obtain the area of a face region, and obtaining a screen occupation ratio according to the area of the face region and the area of a screen display region, wherein the screen occupation ratio is the ratio of the area of the face region to the area of the screen display region;
detecting the target face image by adopting a high-precision face alignment technology to obtain a face posture angle;
judging whether the screen occupation ratio is larger than a first threshold and smaller than a second threshold and whether the face pose angle is smaller than a third threshold, wherein the first threshold is smaller than the second threshold;
and if the screen occupation ratio is judged to be larger than a first threshold value and smaller than a second threshold value, and the face posture angle is smaller than a third threshold value, continuing to execute the steps of obtaining at least one makeup template and obtaining makeup effect information corresponding to each makeup template.
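The gating check of claim 13 reduces to two comparisons. A sketch with illustrative threshold values (the claim leaves the first, second, and third thresholds unspecified):

```python
def should_continue(face_area, screen_area, pose_angle,
                    first=0.10, second=0.80, third=30.0):
    """Claim 13's gate: proceed to makeup-template acquisition only when
    the screen occupation ratio lies strictly between the first and second
    thresholds and the face pose angle is below the third threshold.
    The default threshold values are illustrative assumptions, not values
    disclosed by the patent."""
    ratio = face_area / screen_area
    return first < ratio < second and abs(pose_angle) < third
```

The lower bound rejects faces too small for reliable key-point alignment, the upper bound rejects faces cropped by the frame, and the angle bound rejects poses too far from frontal for the 2D templates to fit.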
14. A cosmetic treatment device, comprising:
the acquisition module is used for acquiring a target face image of a user;
the obtaining module is used for obtaining at least one makeup template and obtaining makeup effect information corresponding to each makeup template;
the determining module is used for determining a makeup processing area corresponding to each makeup template on the target face image;
and the fusion module is used for fusing each makeup template, the corresponding makeup effect information and the corresponding makeup processing area by adopting a fusion strategy corresponding to each makeup template to generate a virtual face image.
15. The device of claim 14, wherein the at least one cosmetic template comprises at least one preset template;
the device further comprises:
and the aligning module is used for aligning the at least one preset template with the corresponding makeup processing area.
16. The apparatus of claim 14, wherein the makeup template comprises a face skin color template, and the obtaining module is specifically configured to perform face skin color segmentation on the target face image to generate the face skin color template.
17. The apparatus of claim 14, further comprising:
the calculation module is used for carrying out face recognition on the target face image to obtain the area of a face region, and obtaining a screen occupation ratio according to the area of the face region and the area of a screen display region, wherein the screen occupation ratio is the ratio of the area of the face region to the area of the screen display region;
the detection module is used for detecting the target face image by adopting a high-precision face alignment technology to obtain a face posture angle;
the judging module is used for judging whether the screen occupation ratio is larger than a first threshold and smaller than a second threshold and whether the face pose angle is smaller than a third threshold, wherein the first threshold is smaller than the second threshold; and if the screen occupation ratio is larger than the first threshold and smaller than the second threshold and the face pose angle is smaller than the third threshold, the obtaining module is triggered to continue to execute the steps of obtaining at least one makeup template and obtaining makeup effect information corresponding to each makeup template.
18. An electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the one or more processors, cause the electronic device to perform the cosmetic treatment method of any one of claims 1 to 13.
19. A computer-readable storage medium comprising a stored program, wherein, when the program runs, an electronic device on which the computer-readable storage medium is located is controlled to execute the cosmetic treatment method according to any one of claims 1 to 13.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110726714.0A CN113469874A (en) | 2021-06-29 | 2021-06-29 | Beauty treatment method and device, electronic equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113469874A true CN113469874A (en) | 2021-10-01 |
Family
ID=77873708
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110726714.0A Pending CN113469874A (en) | 2021-06-29 | 2021-06-29 | Beauty treatment method and device, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113469874A (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114387157A (en) * | 2021-12-31 | 2022-04-22 | 曹芹 | An image processing method, device and computer-readable storage medium |
| CN114677386A (en) * | 2022-03-25 | 2022-06-28 | 北京字跳网络技术有限公司 | Special effect image processing method and device, electronic equipment and storage medium |
| CN116740778A (en) * | 2023-04-04 | 2023-09-12 | 奥比中光科技集团股份有限公司 | A method, device, equipment and medium for processing face image samples of people wearing glasses |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010073222A (en) * | 2010-01-07 | 2010-04-02 | Kao Corp | Makeup simulation method |
| CN108288248A (en) * | 2018-01-02 | 2018-07-17 | 腾讯数码(天津)有限公司 | A kind of eyes image fusion method and its equipment, storage medium, terminal |
| CN109583385A (en) * | 2018-11-30 | 2019-04-05 | 深圳市脸萌科技有限公司 | Face image processing process, device, electronic equipment and computer storage medium |
| CN109829930A (en) * | 2019-01-15 | 2019-05-31 | 深圳市云之梦科技有限公司 | Face image processing process, device, computer equipment and readable storage medium storing program for executing |
| CN111784568A (en) * | 2020-07-06 | 2020-10-16 | 北京字节跳动网络技术有限公司 | Face image processing method and device, electronic equipment and computer readable medium |
| CN112766234A (en) * | 2021-02-23 | 2021-05-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112529999B (en) | A training method, device, equipment and storage medium for parameter estimation model | |
| US11900557B2 (en) | Three-dimensional face model generation method and apparatus, device, and medium | |
| US11043011B2 (en) | Image processing method, apparatus, terminal, and storage medium for fusing images of two objects | |
| US8908904B2 (en) | Method and system for make-up simulation on portable devices having digital cameras | |
| CN103914699B (en) | A kind of method of the image enhaucament of the automatic lip gloss based on color space | |
| CN108701217A (en) | A kind of face complexion recognition methods, device and intelligent terminal | |
| JP2020526809A (en) | Virtual face makeup removal, fast face detection and landmark tracking | |
| CN109784281A (en) | Products Show method, apparatus and computer equipment based on face characteristic | |
| CN111062891A (en) | Image processing method, device, terminal and computer readable storage medium | |
| CN113469874A (en) | Beauty treatment method and device, electronic equipment and storage medium | |
| CN113344837B (en) | Face image processing method and device, computer readable storage medium and terminal | |
| CN113379623B (en) | Image processing method, device, electronic equipment and storage medium | |
| KR20200107957A (en) | Image processing method and device, electronic device and storage medium | |
| CN111767817B (en) | Clothing collocation method, device, electronic equipment and storage medium | |
| CN108463823A (en) | A reconstruction method, device and terminal of a user's hair model | |
| CN109684959A (en) | The recognition methods of video gesture based on Face Detection and deep learning and device | |
| CN106570909A (en) | Skin color detection method, device and terminal | |
| CN108734126B (en) | Beautifying method, beautifying device and terminal equipment | |
| CN112150387B (en) | Method and device for enhancing stereoscopic impression of five sense organs on human images in photo | |
| CN108230297A (en) | A kind of collocation of colour appraisal procedure replaced based on clothes | |
| CN117132711A (en) | Digital portrait customizing method, device, equipment and storage medium | |
| CN116580445B (en) | Large language model face feature analysis method, system and electronic equipment | |
| CN114155569B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
| CN115936796A (en) | A virtual makeup changing method, system, device and storage medium | |
| CN114972014A (en) | Image processing method and device and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20211001 |