
CN110473295B - Method and equipment for carrying out beautifying treatment based on three-dimensional face model - Google Patents

Method and equipment for carrying out beautifying treatment based on three-dimensional face model Download PDF

Info

Publication number
CN110473295B
CN110473295B
Authority
CN
China
Prior art keywords
face
graph
model
beautifying
effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910726111.3A
Other languages
Chinese (zh)
Other versions
CN110473295A (en)
Inventor
徐博 (Xu Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Spiritplume Interactive Entertainment Technology Co ltd
Original Assignee
Chongqing Spiritplume Interactive Entertainment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Spiritplume Interactive Entertainment Technology Co ltd filed Critical Chongqing Spiritplume Interactive Entertainment Technology Co ltd
Priority to CN201910726111.3A priority Critical patent/CN110473295B/en
Publication of CN110473295A publication Critical patent/CN110473295A/en
Application granted granted Critical
Publication of CN110473295B publication Critical patent/CN110473295B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for performing beautifying treatment based on a three-dimensional face model, wherein the method comprises the following steps: scanning to obtain a three-dimensional face model corresponding to the real face; detecting the three-dimensional face model and determining a preset number of key feature points; determining a target beautifying effect according to the user's selection, and taking the key feature points corresponding to the target beautifying effect as beautifying feature points; and adjusting the feature values of the beautifying feature points according to a preset adjustment range to obtain the three-dimensional face model after the beautifying treatment, so that the three-dimensional face model receives efficient beautifying treatment and the user obtains a better beautifying experience.

Description

Method and equipment for carrying out beautifying treatment based on three-dimensional face model
Technical Field
The invention relates to the technical field of portrait processing, in particular to a method and equipment for carrying out beauty treatment based on a three-dimensional face model.
Background
Various beautifying technologies and beautifying software exist in the prior art, but they apply a uniform effect to the real face based on unified default beautifying parameters. For a three-dimensional face model generated by scanning a target real face, the wiring (mesh topology) of the generated model differs from person to person because everyone's features differ, so it is difficult to beautify different three-dimensional face models.
Disclosure of Invention
The invention provides a method for performing beautifying treatment based on a three-dimensional face model, which solves the prior-art problem that a real face can only be beautified with unified default beautifying parameters and that it is difficult to beautify different three-dimensional face models. The method comprises the following steps:
scanning to obtain a three-dimensional face model corresponding to the real face;
detecting the three-dimensional face model, and determining a preset number of key feature points;
determining a target beautifying effect according to the selection of a user, and taking key feature points corresponding to the target beautifying effect as beautifying feature points;
and adjusting the characteristic value of the beautifying characteristic point according to a preset adjusting range to obtain the three-dimensional face model after the beautifying treatment.
Preferably, the scanning obtains a three-dimensional face model corresponding to the real face, specifically:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
and taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model.
Preferably, when the target beautifying effect is whitening and skin-polishing, the characteristic value of the beautifying characteristic point is adjusted according to a preset adjusting range, specifically:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
Preferably, when the target beautifying effect is a large eye, the characteristic value of the beautifying characteristic point is adjusted according to a preset adjusting range, specifically:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
Preferably, when the target beautifying effect is a face thinning effect, the characteristic value of the beautifying characteristic point is adjusted according to a preset adjusting range, specifically:
step a, generating a thin-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a thin face model mesh by using a mixing coefficient;
and c, adjusting the characteristic values of the thin face model grids according to the preset adjustment range of the thin face parameters, wherein the steps a to b are repeatedly executed based on different thin face parameters to adjust the thin face effect, and the preset adjustment range of the thin face parameters is determined according to the result of the thin face effect adjustment.
Correspondingly, the application also provides a device for performing beautifying treatment based on a three-dimensional face model, which comprises:
the scanning module is used for scanning and acquiring a three-dimensional face model corresponding to the real face;
the detection module is used for detecting the three-dimensional face model and determining a preset number of key feature points;
the determining module is used for determining a target beautifying effect according to the selection of a user, and taking key feature points corresponding to the target beautifying effect as beautifying feature points;
and the adjusting module is used for adjusting the characteristic value of the beautifying characteristic point according to a preset adjusting range to obtain the three-dimensional face model after the beautifying treatment.
Preferably, the scanning module is specifically configured to:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
and taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model.
Preferably, when the target beautifying effect is whitening and skin-polishing, the adjusting module is specifically configured to:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
Preferably, when the target beautifying effect is a large eye, the adjusting module is specifically configured to:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
Preferably, when the target beautifying effect is a face thinning effect, the adjusting module is specifically configured to:
step a, generating a thin-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a thin face model mesh by using a mixing coefficient;
and c, adjusting the characteristic values of the thin face model grids according to the preset adjustment range of the thin face parameters, wherein the steps a to b are repeatedly executed based on different thin face parameters to adjust the thin face effect, and the preset adjustment range of the thin face parameters is determined according to the result of the thin face effect adjustment.
Therefore, by applying this technical scheme, a three-dimensional face model corresponding to the real face is obtained by scanning; the three-dimensional face model is detected and a preset number of key feature points are determined; a target beautifying effect is determined according to the user's selection, and the key feature points corresponding to the target beautifying effect are taken as beautifying feature points; and the feature values of the beautifying feature points are adjusted according to a preset adjustment range to obtain the three-dimensional face model after the beautifying treatment, so that the three-dimensional face model receives efficient beautifying treatment and the user obtains a better beautifying experience.
Drawings
Fig. 1 is a schematic flow chart of a method for performing a face beautifying process based on a three-dimensional face model according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a device for performing a face-beautifying process based on a three-dimensional face model according to an embodiment of the present application;
FIG. 3 is an original image for whitening and skin-polishing obtained in an embodiment of the present application;
FIG. 4 is a model mesh for the large eye obtained in an embodiment of the present application;
FIG. 5 is a model mesh for a thin face acquired in an embodiment of the present application;
FIG. 6 is an effect diagram of an optimized surface blur filtering algorithm in an embodiment of the present application;
FIG. 7 is a graph showing the effect of the high contrast preservation treatment in an embodiment of the present application;
FIG. 8 is a diagram of a face shadow effect after highlight operation processing in an embodiment of the present application;
FIG. 9 is a graph showing the effect of the tone adjustment process according to an embodiment of the present application;
FIG. 10 is a graph showing the effects of skin-polishing and freckle removal in an embodiment of the present application;
FIG. 11 is a first UV map in an embodiment of the present application;
FIG. 12 is a second UV map in an embodiment of the present application;
FIG. 13 is a third UV map in an embodiment of the present application;
FIG. 14 is a graph showing the effects of three UV maps synthesized in accordance with one embodiment of the present application;
FIG. 15 is a graph showing the effect of removing the nose shadow in an embodiment of the present application;
FIG. 16 is a graph showing the effect of selecting a skin color region mask in an embodiment of the present application;
FIG. 17 is a graph showing the effect of skin tone determination in an embodiment of the present application;
FIG. 18 is an effect diagram of a three-dimensional face model after whitening and skin-polishing treatment in an embodiment of the present application;
FIG. 19 is a graph showing the effect of determining the region of action of the large eye in an embodiment of the present application;
FIG. 20 is a graph showing the effect of selecting mesh vertices in the large eye region of action in an embodiment of the present application;
FIG. 21 is an effect diagram of a three-dimensional model of a face after large-eye processing in an embodiment of the present application;
fig. 22 is an effect diagram of a face three-dimensional model after face thinning processing in an embodiment of the present application.
Detailed Description
As described in the background art, in the prior art, the real face can be beautified only based on the unified default beautifying parameters, and it is difficult to beautify different three-dimensional face models.
In order to solve the above problems, the embodiment of the application provides a method for performing beautifying treatment based on a three-dimensional face model: key feature points of the face are detected, and their feature values are adjusted according to a preset adjustment range, so that real-time beautifying treatment can be applied to different three-dimensional face models and a better beautifying effect is obtained.
As shown in fig. 1, a flow chart of a method for performing a face beautifying process based on a three-dimensional face model according to the present application includes the following steps:
s101, scanning to obtain a three-dimensional face model corresponding to a real face;
specifically, a three-dimensional scanning device such as a camera or a scanner is used for carrying out three-dimensional scanning on the real face, and a three-dimensional face model corresponding to the real face is obtained.
It should be noted that, a person skilled in the art may adopt different scanning devices to scan the real face according to actual situations, which does not affect the protection scope of the present application.
Considering that different beautifying effects need to be obtained based on different three-dimensional face models, in a preferred embodiment of the present application, the scanning obtains a three-dimensional face model corresponding to a real face, specifically:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
and taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model.
Specifically, scanning the real face yields original graphs of its front, left and right sides. The original graph can be used directly as the three-dimensional face model, on which beautifying treatments such as whitening and skin-polishing are performed; alternatively, a model grid is generated from the original graph and used as the three-dimensional face model, on which beautifying treatments such as the large eye and face thinning are performed.
When generating the model grid based on the original graph, a person skilled in the art can flexibly select different modes, such as an ARKit mode or a depth map mode of an AR development platform, according to actual conditions, and the different modes do not affect the protection scope of the application.
S102, detecting the three-dimensional face model, and determining a preset number of key feature points.
Specifically, the three-dimensional face model obtained in the previous step is detected to determine a preset number of key feature points. The key feature points can be feature points at positions that strongly influence the shape of the three-dimensional face model, such as the contour, eye corners, nose tip, nose wings, mouth corners and eyebrows of the face.
The person skilled in the art can determine different preset numbers of key feature points according to actual needs, which does not affect the protection scope of the application.
S103, determining a target beautifying effect according to the selection of a user, and taking key feature points corresponding to the target beautifying effect as beautifying feature points.
Specifically, the user can select different target beautifying effects according to their own needs; for example, a user who wants to whiten and smooth the skin of the three-dimensional face model selects whitening and skin-polishing as the target beautifying effect. Different key feature points are processed for different target beautifying effects, so the key feature points corresponding to the selected target beautifying effect are taken as the beautifying feature points to be processed subsequently.
A person skilled in the art can associate different key feature points with the same target beautifying effect according to actual needs, without affecting the protection scope of the application.
And S104, adjusting the characteristic value of the feature point of the beauty according to a preset adjusting range to obtain the three-dimensional face model after the beauty treatment.
Specifically, an adjustment range is preset for the feature values of the beautifying feature points, and the corresponding beautifying treatment is performed by adjusting those feature values within this range, which yields the three-dimensional face model after the beautifying treatment.
In order to ensure the beautifying effect of whitening and skin-polishing on the three-dimensional face model, in the preferred embodiment of the present application, when the target beautifying effect is whitening and skin-polishing, the characteristic values of the beautifying characteristic points are adjusted according to the preset adjustment range, specifically:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
When the target beautifying effect selected by the user is whitening and skin-polishing, the feature values of the beautifying feature points are adjusted within the preset adjustment range through a series of processes: surface blur filtering of the original graph, high contrast retention, the highlight operation, contrast amplification, tone scale adjustment, unwrapping and synthesizing the UV texture maps, removing nose shadows based on a fixed mask, selecting a skin color region mask based on the HSV color space, and determining the skin tone.
It should be noted that the solution of the above preferred embodiment is only one specific implementation provided in the present application; other ways of adjusting the feature value of the beautifying feature point within the preset adjustment range all belong to the protection scope of the present application.
In order to ensure the beautifying effect of the large eye on the three-dimensional face model, in the preferred embodiment of the present application, when the target beautifying effect is the large eye, the characteristic value of the beautifying characteristic point is adjusted according to the preset adjustment range, specifically:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
As described above, when the target beautifying effect selected by the user is the large eye, the feature value of the beautifying feature point is adjusted within the preset adjustment range as follows: A. determine the human eye layout in the model grid, where the layout represents the position and rotation of the eyes; B. determine the large-eye action area from the eye layout and the distribution rule of eyes on a human face; C. select the mesh vertices within the large-eye action area; D. process the mesh vertices with a local scaling distortion algorithm; E. adjust the processed mesh vertices within the preset adjustment range of the large-eye parameters. The preset adjustment range of the large-eye parameters can be determined as follows: repeat the operations A through D with different large-eye parameters to tune the large-eye effect, and determine the range from the tuning result.
It should be noted that the solution of the above preferred embodiment is only one specific implementation provided in the present application; other ways of adjusting the feature value of the beautifying feature point within the preset adjustment range all belong to the protection scope of the present application.
In order to ensure the face beautifying effect of face thinning on the three-dimensional face model, in the preferred embodiment of the present application, when the target face beautifying effect is face thinning, the feature values of the face beautifying feature points are adjusted according to a preset adjustment range, specifically:
a thin-face model generated based on the model mesh is used as the fusion deformation (blendshape);
the blendshape is processed according to the mixing coefficient to obtain a thin-face model mesh;
and adjusting the thin face model grid according to the adjustment range of the preset thin face parameters.
As described above, when the target beautifying effect selected by the user is face thinning, the feature value of the beautifying feature point is adjusted within the preset adjustment range as follows: a. generate a thin-face model blendshape based on the model grid; b. mix the blendshape and the model mesh into a thin-face model mesh using a mixing coefficient; c. adjust the feature values of the thin-face model mesh according to the preset adjustment range of the face-thinning parameters. The preset adjustment range of the face-thinning parameters can be determined as follows: repeat steps a to b with different face-thinning parameters to tune the face-thinning effect, and determine the range from the tuning result.
It should be noted that the solution of the above preferred embodiment is only one specific implementation provided in the present application; other ways of adjusting the feature value of the beautifying feature point within the preset adjustment range all belong to the protection scope of the present application.
By applying this technical scheme, a three-dimensional face model corresponding to the real face is obtained by scanning; the three-dimensional face model is detected and a preset number of key feature points are determined; a target beautifying effect is determined according to the user's selection, and the key feature points corresponding to the target beautifying effect are taken as beautifying feature points; and the feature values of the beautifying feature points are adjusted according to a preset adjustment range to obtain the three-dimensional face model after the beautifying treatment, so that the three-dimensional face model receives efficient beautifying treatment and the user obtains a better beautifying experience.
In order to further explain the technical idea of the invention, the technical scheme of the invention is described with specific application scenarios.
The embodiment of the application provides a method for carrying out beauty treatment based on a three-dimensional face model, which comprises the steps of generating the three-dimensional face model by scanning a real face, determining a plurality of key feature points of the three-dimensional face model, determining corresponding beauty feature points according to a target beauty effect selected by a user, and adjusting feature values corresponding to the beauty feature points in a preset range, so that real-time beauty treatment is carried out on different three-dimensional face models, and a better beauty treatment effect is obtained.
The three-dimensional face model may be obtained as follows: the real face is scanned with a scanning device such as a camera to obtain original graphs of the front, left and right of the real face, as shown in fig. 3, and a model grid of the original graphs is obtained with the AR development platform ARKit, as shown in figs. 4 and 5 (a person skilled in the art may also obtain the model grid in other ways, such as a depth-map technique). The original graph or the model grid is then used as the three-dimensional face model to be processed.
When the target beautifying effect selected by the user is whitening and skin-polishing, the beautifying treatment can be carried out through the following steps:
First, to preserve the edge information of the original graph, an edge-preserving filter operation is carried out on it with a surface blur filtering algorithm, whose principle formula is as follows:
X_out = sum_i(w_i * x_i) / sum_i(w_i), where w_i = max(0, 1 - |x_i - x| / (2.5 * y))
wherein:
r represents the neighborhood radius;
y represents a threshold value ranging from 0 to 255;
x represents the current pixel value;
x_i represents the i-th pixel value in the neighborhood of radius r;
X_out represents the output value of the surface blur.
However, the complexity of surface blur filtering is O(n^2). To improve processing efficiency, the embodiment of the application optimizes the algorithm with O(1) vector addition and subtraction operations based on the x86/ARM instruction sets and O(1)-level operations based on a convolution histogram; the specific optimization process is prior art and is not repeated here. The edge-preserving filter operation is thus performed on the original graph with the optimized surface blur filtering algorithm, improving processing efficiency; the processed effect is shown in fig. 6.
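For reference, the following Python/NumPy sketch implements the unoptimized surface blur exactly as in the formula above, with O(r^2) work per pixel; the function name and the single-channel grayscale layout are illustrative assumptions, not the patent's implementation:

    import numpy as np

    def surface_blur(img, r, y):
        # Naive edge-preserving surface blur: each neighbor is weighted by how
        # close its value is to the center pixel, so strong edges are kept.
        # img: 2D array with values in [0, 255]; r: radius; y: threshold.
        img = img.astype(np.float64)
        h, w = img.shape
        out = np.empty_like(img)
        pad = np.pad(img, r, mode='edge')
        for i in range(h):
            for j in range(w):
                x = img[i, j]                                # current pixel value
                nb = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]   # neighborhood of radius r
                wgt = np.clip(1.0 - np.abs(nb - x) / (2.5 * y), 0.0, None)
                out[i, j] = (wgt * nb).sum() / wgt.sum()     # X_out: weighted average
        return out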
Second, a high contrast retention operation is carried out on the original graph and the graph processed in the first step.
High contrast retention filters out the high-contrast portions of the image for processing, mainly its edges and contours: portions where a pixel differs strongly from its surroundings are retained, while the rest becomes gray. In a portrait, for example, the eyes, mouth, skin blemishes and contours are retained; the processed effect is shown in fig. 7.
The principle formula of the high contrast preservation is as follows:
f(x)=x-blur(x)+0.5
where each pixel value x lies in [0, 1], and blur is the surface blur function from the previous step.
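A minimal sketch of this step, reusing the surface_blur helper from the previous sketch (the final clamping to [0, 1] is a safety addition not stated in the formula):

    import numpy as np

    def high_contrast_retain(img, r, y):
        # f(x) = x - blur(x) + 0.5: flat regions land on mid-gray 0.5, while
        # edges, contours and blemishes stand out. img holds values in [0, 255].
        x = img / 255.0
        blurred = surface_blur(img, r, y) / 255.0  # blur(x) from the previous step
        return np.clip(x - blurred + 0.5, 0.0, 1.0)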
Third, a highlight operation is performed on the graph processed in the second step and the contrast is amplified to obtain the facial shadows; the processed effect is shown in fig. 8.
The principle formula of the highlight operation is as follows:
f(x) = 2*x*x, if x <= 0.5
f(x) = 1 - 2*(1-x)*(1-x), if x > 0.5
For each pixel value x in [0, 1]: if x <= 0.5, f(x) = 2*x*x is computed; if x > 0.5, f(x) = 1 - 2*(1-x)*(1-x) is computed. This makes the shadows and edges of the graph more distinct.
Contrast measures the difference in brightness between the brightest whites and the darkest blacks of an image: the larger the range of difference, the greater the contrast; the smaller the range, the smaller the contrast. Amplifying the contrast improves the sharpness, detail rendition and gray-level rendition of the image.
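A sketch of the highlight curve and a simple contrast amplification; the stretch factor k is an illustrative constant, not a value given by the patent:

    import numpy as np

    def highlight(x):
        # Piecewise curve from the formula above: darkens values <= 0.5 and
        # brightens values > 0.5, deepening shadows and edges. x in [0, 1].
        return np.where(x <= 0.5, 2.0 * x * x, 1.0 - 2.0 * (1.0 - x) * (1.0 - x))

    def amplify_contrast(x, k=1.5):
        # Stretch values away from mid-gray; k > 1 increases contrast.
        return np.clip((x - 0.5) * k + 0.5, 0.0, 1.0)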
Fourth, the brightness of the shadow portion produced in the third step is adjusted through tone scale adjustment, the blemish portions of the facial skin are selected, and histogram adjustment is performed; the processed effect is shown in fig. 9.
The tone scale is an index of image intensity, and the color fullness and fineness of an image are determined by its tone scale. Tone here refers to brightness, independent of color: the brightest tone is pure white and the darkest is pure black.
The principle formula of the histogram adjustment curve of the tone scale is as follows:
f(x)=curve(x)
The specific form of curve(x) may be modeled on, for example, a trigonometric function or a power function.
Fifth, the graph processed in the first step is fused with the graph processed in the fourth step to achieve the skin-polishing and freckle-removal effects; the processed effect is shown in fig. 10. The specific principle formula is as follows:
f(u,v) = src(u,v)*a + dest(u,v)*(1-a)
where, for texture map coordinates u, v in [0, 1], f(u,v) = src(u,v)*a + dest(u,v)*(1-a) is computed; src is the color of the original graph at (u, v), dest is the color of the target graph at (u, v), and a is a constant.
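As a sketch, the fusion is a single vectorized blend over the whole image (src: the smoothed second graph, dest: the spot-corrected fifth graph, a: the constant weight):

    def fuse(src, dest, a):
        # f(u,v) = src(u,v)*a + dest(u,v)*(1-a), applied to every texel at once.
        return src * a + dest * (1.0 - a)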
Sixth, the graph processed in the fifth step is unwrapped according to the UV texture mapping coordinates of the original graph to obtain three UV maps; the processed effects are shown in figs. 11-13.
UV is short for the u, v texture map coordinates, which record the position of every point in the graph; these coordinates are tied to the three-dimensional model and determine where the surface texture map is placed.
Seventh, the three UV maps in the sixth step are synthesized, and the effect diagram after the treatment is shown in fig. 14.
Eighth, the nose shadows in the graph processed in the seventh step are removed based on a fixed mask; the processed effect is shown in fig. 15.
Because faces vary endlessly, algorithms that locate shadows in photographs are complex; in a UV map, however, the positions of the facial features are relatively fixed, so the shadows can be handled with a fixed mask, filling the masked portion with the skin color of adjacent areas. The principle formulas are as follows:
mask=tex2d(mask,(u,v))
ret=tex2d(src,((u,v)+mask.a*offset))
where, for u, v in [0, 1], mask = tex2d(mask, (u, v)) fetches the color value of the mask; ret = tex2d(src, (u, v) + mask.a * offset) then samples the original graph with the lookup position shifted by the mask's transparency (alpha) channel value times the offset, giving the final color. The tex2d function samples the color of a texture map.
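The two operations can be sketched as follows, assuming nearest-neighbor sampling and an RGBA mask whose alpha channel marks the nose-shadow region; the helper names are illustrative:

    import numpy as np

    def tex2d(tex, uv):
        # Nearest-neighbor texture fetch: tex is (H, W, C), uv is (..., 2)
        # with u, v in [0, 1].
        h, w = tex.shape[:2]
        u = np.clip((uv[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
        v = np.clip((uv[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
        return tex[v, u]

    def remove_nose_shadow(src, mask, uv, offset):
        # Where the fixed mask's alpha is set, shift the lookup point by
        # mask.a * offset so neighboring skin color replaces the shadow.
        m = tex2d(mask, uv)                               # mask = tex2d(mask, (u, v))
        shifted = uv + m[..., 3:4] * np.asarray(offset)   # (u, v) + mask.a * offset
        return tex2d(src, shifted)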
Ninth, a skin color region mask is selected with a skin color detection smoothing algorithm based on the HSV (hue, saturation, value) color space; the processed effect is shown in fig. 16.
Color changes in the RGB (red, green, blue) color space are discontinuous, whereas in the HSV color space skin tones fall within a relatively stable interval for color detection. After converting RGB to HSV, judging normal human skin color therefore becomes simple: colors within a certain interval are skin color, and the mask edges are softened using each color's offset from the interval. The principle formulas are as follows:
inverselerp(a,b,x) = (x-a)/(b-a)
f(x) = saturate(min(inverselerp(7-length, 7, x), inverselerp(20+length, 20, x)))
f(y) = saturate(inverselerp(28, 28+length, y))
f(z) = saturate(inverselerp(20, 20+length, z))
skin(x,y,z) = min(f(x), f(y), f(z))
where inverselerp(a, b, x) = (x - a)/(b - a) gives the relative position of x within the interval [a, b], and the formulas test whether the HSV color value of a point in the graph falls within the skin-color range.
For the H channel, take a = 7 - length and b = 7 to obtain the first term of f(x), where length is a constant;
for the H channel again, take a = 20 + length and b = 20 to obtain the second term of f(x);
for the S channel, take a = 28 and b = 28 + length to obtain f(y);
for the V channel, take a = 20 and b = 20 + length to obtain f(z);
and the minimum of all these values is taken, which makes the result convenient to clamp.
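A sketch of the mask computation; the channel ranges (H roughly in degrees, S and V on the scale implied by the constants 28 and 20) and the softness constant length are assumptions read off the formulas above:

    import numpy as np

    def saturate(x):
        # Clamp to [0, 1].
        return np.clip(x, 0.0, 1.0)

    def inverselerp(a, b, x):
        # (x - a) / (b - a): relative position of x within the interval [a, b].
        return (x - a) / (b - a)

    def skin(h, s, v, length=2.0):
        # Soft skin-color mask: ~1 well inside the skin interval, falling to 0
        # across a band of width 'length' at the edges (length is illustrative).
        fx = saturate(np.minimum(inverselerp(7 - length, 7, h),
                                 inverselerp(20 + length, 20, h)))
        fy = saturate(inverselerp(28, 28 + length, s))
        fz = saturate(inverselerp(20, 20 + length, v))
        return np.minimum(fx, np.minimum(fy, fz))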
Tenth, skin tone is determined based on the linear amplification operation of the HSV color space, and the effect diagram after the processing is shown in fig. 17.
For skin color adjustment, the image is traversed to compute the average color in HSV space, which is compared with the target HSV color to calculate the magnification or reduction factor to apply for whitening. The principle formulas are as follows:
color=tex2d(main,uv);
hsv=RGB2HSV(color.rgb);
value=skin(hsv);
light=hsv*Whitening;
result(r,g,b)=HSV2RGB(lerp(hsv,light,value))
where, for u, v in [0, 1], the color of the original image is fetched; the color values x, y, z in [0, 1] are converted to HSV with RGB2HSV; the h, s, v values are amplified by the Whitening factor; the skin-color formula of the previous step is applied to h, s, v to obtain the skin value; and the lerp function interpolates between the original hsv and the amplified light by that value.
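A sketch of the whitening step, reusing the skin() mask from the previous sketch; matplotlib's converters return h, s, v in [0, 1], so the rescaling to the skin() ranges and the example Whitening factor are assumptions:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def whiten(rgb, whitening=1.1):
        # rgb: (..., 3) floats in [0, 1]; whitening > 1 brightens skin pixels.
        hsv = rgb_to_hsv(rgb)
        # Rescale channels to the (assumed) ranges used by skin(): H in
        # degrees, S and V on a 0-100 scale.
        value = skin(hsv[..., 0] * 360.0, hsv[..., 1] * 100.0, hsv[..., 2] * 100.0)
        light = np.clip(hsv * whitening, 0.0, 1.0)        # light = hsv * Whitening
        mixed = hsv + (light - hsv) * value[..., None]    # lerp(hsv, light, value)
        return hsv_to_rgb(mixed)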
After this series of treatments, the three-dimensional face model after whitening and skin-polishing treatment is obtained; the effect is shown in fig. 18.
When the target beauty effect selected by the user is a large eye, the beauty treatment can be performed by:
First, the human eye layout in the model mesh is determined;
second, xy-plane coordinates and a radius are set as the large-eye action area according to the eye layout and the distribution rule of eyes on the face; the processed effect is shown in fig. 19;
third, the mesh vertices of the model mesh are filtered and selected based on the area set in the second step; the processed effect is shown in fig. 20;
Fourth, the mesh vertices are processed based on a local scaling distortion algorithm, whose principle formula is as follows:
f_s(r) = r * (1 - a * (1 - r / r_max)^2)
where r is the distance from the pixel to the center of the circle;
r_max is the maximum radius of deformation;
a is the scaling coefficient;
and f_s(r) is the distance from the pixel to the center after scaling.
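A sketch of the warp applied as a forward displacement of 2D vertex positions, using the f_s reconstructed above; whether a given sign of a enlarges or shrinks the region depends on whether the map is applied to vertices directly (as here) or as an inverse sampling map, so the parameter values are illustrative:

    import numpy as np

    def local_scale(vertices, center, r_max, a):
        # Move each vertex at distance r from the center to distance
        # f_s(r) = r * (1 - a * (1 - r / r_max)^2). Vertices outside r_max
        # are untouched, and f_s(r_max) = r_max keeps the boundary continuous.
        d = vertices - np.asarray(center, dtype=np.float64)
        r = np.linalg.norm(d, axis=-1, keepdims=True)
        scale = 1.0 - a * (1.0 - r / r_max) ** 2          # f_s(r) / r
        inside = (r > 0) & (r < r_max)
        return np.where(inside, center + d * scale, vertices)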
Fifth step: and (3) repeating the first to fourth steps to adjust by changing different parameter values, and setting the minimum parameter value and the maximum parameter value of the large eye.
Sixth, the parameter value is mapped to a scale of 0 to 100, where 0 is the model's original eye and 100 is the maximum enlarged eye. The user's adjustment of this scale drives the underlying parameter of the previous step; the adjustable minimum and maximum are preset, and the user can perform the large-eye beautifying operation freely within this range to reach the effect they find best. The effect of the three-dimensional face model after large-eye processing is shown in fig. 21.
When the target beautifying effect selected by the user is a thin face, the beautifying process can be performed by the following steps:
First, the model mesh is taken as a reference head, and a thin-face model blendshape is made from it;
second, the blendshape is mixed into the model mesh of the first step with a mixing coefficient; the principle formula is as follows:
V_target = V_src + dV_blendshape * t
where V_target is the vertex position after mixing;
V_src is the original vertex position of the model mesh;
dV_blendshape is the vertex displacement data of the thin-face blendshape;
and t is the mixing coefficient.
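The mixing itself is one vectorized line; a sketch with per-vertex position and displacement arrays:

    def blend_thin_face(v_src, dv_blendshape, t):
        # V_target = V_src + dV_blendshape * t: t = 0 keeps the original mesh,
        # t = 1 applies the full thin-face shape; intermediate t interpolates.
        return v_src + dv_blendshape * t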
Third, the first and second steps are repeated with different parameter values to tune the effect, and the minimum and maximum values of the face-thinning parameter are set.
Fourth, the parameter value is mapped to a scale of 0 to 100, where 0 is the model's original face and 100 is the maximum thinned face. The user's adjustment of this scale drives the underlying parameter of the previous step; the adjustable minimum and maximum are preset, and the user can perform the face-thinning beautifying operation freely within this range to reach the effect they find best. The effect of the three-dimensional face model after face-thinning processing is shown in fig. 22.
The above describes the beautifying processes of whitening and skin-polishing, the large eye and face thinning based on the three-dimensional face model. By detecting key feature points and adjusting their feature values according to preset adjustment ranges, the embodiment of the application can be continuously extended with other beautifying functions, such as eyebrow shaping, lip gloss, blush and cosmetic contact lenses, so that the three-dimensional face model receives efficient beautifying treatment and the user obtains a better beautifying experience.
In order to achieve the above technical objective, the present application proposes a device for performing a face-beautifying process based on a three-dimensional face model, as shown in fig. 2, including:
the scanning module 201 is used for scanning and acquiring a three-dimensional face model corresponding to the real face;
the detection module 202 is configured to detect the three-dimensional face model, and determine a preset number of key feature points;
a determining module 203, configured to determine a target beauty effect according to a selection of a user, and use a key feature point corresponding to the target beauty effect as a beauty feature point;
and the adjusting module 204 is configured to adjust the feature value of the beautifying feature point according to a preset adjustment range to obtain the three-dimensional face model after the beautifying treatment.
In a specific application scenario, the scanning module 201 is specifically configured to:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
and taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model.
In a specific application scenario, when the target beautifying effect is whitening and skin polishing, the adjusting module 204 is specifically configured to:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
In a specific application scenario, when the target beautifying effect is a large eye, the adjusting module 204 is specifically configured to:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
In a specific application scenario, when the target beautifying effect is a thin face, the adjusting module 204 is specifically configured to:
step a, generating a thin-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a thin face model mesh by using a mixing coefficient;
and c, adjusting the characteristic values of the thin face model grids according to the preset adjustment range of the thin face parameters, wherein the steps a to b are repeatedly executed based on different thin face parameters to adjust the thin face effect, and the preset adjustment range of the thin face parameters is determined according to the result of the thin face effect adjustment.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented in hardware, or by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention may be embodied as a software product, which may be stored in a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, etc.) and includes several instructions for causing a computer device (a personal computer, a server, a network device, etc.) to execute the method described in each implementation scenario of the present invention.
Those skilled in the art will appreciate that the drawing is merely a schematic illustration of a preferred implementation scenario and that the modules or flows in the drawing are not necessarily required to practice the invention.
Those skilled in the art will appreciate that the modules in the apparatus may be distributed in the apparatus of the implementation scenario according to the implementation scenario description, or that corresponding changes may be located in one or more apparatuses different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above-mentioned inventive sequence numbers are merely for description and do not represent advantages or disadvantages of the implementation scenario.
The foregoing disclosure is merely illustrative of some embodiments of the invention, and the invention is not limited thereto, as modifications may be made by those skilled in the art without departing from the scope of the invention.

Claims (6)

1. A method for performing a face-beautifying process based on a three-dimensional face model, the method comprising:
scanning to obtain a three-dimensional face model corresponding to a real face, which specifically comprises the following steps:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model;
detecting the three-dimensional face model, and determining a preset number of key feature points;
determining a target beautifying effect according to the selection of a user, and taking key feature points corresponding to the target beautifying effect as beautifying feature points;
adjusting the characteristic value of the beauty characteristic point according to a preset adjusting range to obtain a three-dimensional face model after beauty treatment;
when the target beautifying effect is whitening and skin-polishing, the characteristic value of the beautifying characteristic point is adjusted according to a preset adjusting range, specifically:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
2. The method of claim 1, wherein when the target beauty effect is a large eye, adjusting the feature value of the beauty feature point according to a preset adjustment range, specifically:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
3. The method of claim 1, wherein when the target beautifying effect is a face thinning effect, adjusting the feature value of the beautifying feature point according to a preset adjustment range, specifically:
step a, generating a thin-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a thin face model mesh by using a mixing coefficient;
and c, adjusting the characteristic values of the thin face model grids according to the preset adjustment range of the thin face parameters, wherein the steps a to b are repeatedly executed based on different thin face parameters to adjust the thin face effect, and the preset adjustment range of the thin face parameters is determined according to the result of the thin face effect adjustment.
4. An apparatus for performing a face-beautifying process based on a three-dimensional face model, the apparatus comprising:
the scanning module is used for scanning and acquiring a three-dimensional face model corresponding to the real face;
the detection module is used for detecting the three-dimensional face model and determining a preset number of key feature points;
the determining module is used for determining a target beautifying effect according to the selection of a user, and taking key feature points corresponding to the target beautifying effect as beautifying feature points;
the adjustment module is used for adjusting the characteristic value of the beautifying characteristic point according to a preset adjustment range to obtain a three-dimensional face model after beautifying treatment;
the scanning module is specifically configured to:
scanning the real face to obtain an original graph comprising the front face, the left face and the right face of the real face;
taking the original graph as the three-dimensional face model, or generating a model grid based on the original graph and taking the model grid as the three-dimensional face model;
when the target beautifying effect is whitening and skin-polishing, the adjusting module is specifically configured to:
processing the original graph based on an optimized surface blur filtering algorithm to obtain a second graph, wherein the optimization specifically reduces the algorithm complexity;
carrying out high contrast retention processing on the original graph and the second graph to obtain a third graph;
performing a highlight operation on the third graph and amplifying the contrast to obtain a fourth graph;
performing a brightness adjustment operation on the shadow portion of the fourth graph through tone scale adjustment, and selecting the blemish portions of the facial skin to obtain a fifth graph;
fusing the second graph and the fifth graph, and then unwrapping according to the UV texture mapping coordinates of the original graph to obtain three UV maps;
synthesizing the three UV maps to obtain a sixth graph;
removing nose shadows in the sixth graph based on a fixed mask;
selecting a skin color region mask based on a skin color detection smoothing algorithm in the HSV color space;
and determining the skin tone based on a linear magnification operation in the HSV color space.
5. The apparatus of claim 4, wherein when the target cosmetic effect is a large eye, the adjustment module is specifically configured to:
step A, determining the human eye layout in the model grid;
step B, determining a large eye action area according to the human eye layout and the distribution rule of eyes on a human face;
step C, selecting grid vertexes in the large eye action area;
step D, processing the grid vertexes based on a local scaling distortion algorithm;
and E, adjusting the processed characteristic values of the grid vertexes according to the preset adjusting range of the large-eye parameters, wherein the steps A to D are repeatedly executed based on different large-eye parameters to adjust the large-eye effect, and the preset adjusting range of the large-eye parameters is determined according to the result of the large-eye effect adjustment.
6. The apparatus of claim 4, wherein when the target beauty effect is a thin face, the adjusting module is specifically configured to:
step a, generating a thin-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a thin face model mesh by using a mixing coefficient;
and c, adjusting the characteristic values of the thin face model grids according to the preset adjustment range of the thin face parameters, wherein the steps a to b are repeatedly executed based on different thin face parameters to adjust the thin face effect, and the preset adjustment range of the thin face parameters is determined according to the result of the thin face effect adjustment.
CN201910726111.3A 2019-08-07 2019-08-07 Method and equipment for carrying out beautifying treatment based on three-dimensional face model Active CN110473295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910726111.3A CN110473295B (en) 2019-08-07 2019-08-07 Method and equipment for carrying out beautifying treatment based on three-dimensional face model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910726111.3A CN110473295B (en) 2019-08-07 2019-08-07 Method and equipment for carrying out beautifying treatment based on three-dimensional face model

Publications (2)

Publication Number Publication Date
CN110473295A CN110473295A (en) 2019-11-19
CN110473295B true CN110473295B (en) 2023-04-25

Family

ID=68510355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910726111.3A Active CN110473295B (en) 2019-08-07 2019-08-07 Method and equipment for carrying out beautifying treatment based on three-dimensional face model

Country Status (1)

Country Link
CN (1) CN110473295B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381729B (en) * 2020-11-12 2024-06-18 广州繁星互娱信息科技有限公司 Image processing method, device, terminal and storage medium
CN112634126B (en) * 2020-12-22 2024-10-18 厦门美图之家科技有限公司 Portrait age-reducing processing method, training method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513007A (en) * 2015-12-11 2016-04-20 惠州Tcl移动通信有限公司 Mobile terminal based photographing beautifying method and system, and mobile terminal
CN108447026A (en) * 2018-01-31 2018-08-24 上海思愚智能科技有限公司 Method, terminal and computer-readable storage medium for acquiring beautifying parameter attribute values
CN108765273A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Virtual face-lifting method and device for taking pictures of faces
CN109190503A (en) * 2018-08-10 2019-01-11 珠海格力电器股份有限公司 beautifying method, device, computing device and storage medium
CN109584146A (en) * 2018-10-15 2019-04-05 深圳市商汤科技有限公司 Beautifying treatment method and apparatus, electronic device and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2560340A (en) * 2017-03-07 2018-09-12 Eyn Ltd Verification method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513007A (en) * 2015-12-11 2016-04-20 惠州Tcl移动通信有限公司 Mobile terminal based photographing beautifying method and system, and mobile terminal
CN108447026A (en) * 2018-01-31 2018-08-24 上海思愚智能科技有限公司 Method, terminal and computer-readable storage medium for acquiring beautifying parameter attribute values
CN108765273A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Virtual face-lifting method and device for taking pictures of faces
CN109190503A (en) * 2018-08-10 2019-01-11 珠海格力电器股份有限公司 beautifying method, device, computing device and storage medium
CN109584146A (en) * 2018-10-15 2019-04-05 深圳市商汤科技有限公司 Beautifying treatment method and apparatus, electronic device and computer storage medium

Also Published As

Publication number Publication date
CN110473295A (en) 2019-11-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant