
CN110895789B - Face beautifying method and device - Google Patents

Face beautifying method and device

Info

Publication number
CN110895789B
CN110895789B (application CN201811066782.3A)
Authority
CN
China
Prior art keywords
beautified
pixel
image
face image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811066782.3A
Other languages
Chinese (zh)
Other versions
CN110895789A (en)
Inventor
张彩红
刘刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201811066782.3A priority Critical patent/CN110895789B/en
Publication of CN110895789A publication Critical patent/CN110895789A/en
Application granted granted Critical
Publication of CN110895789B publication Critical patent/CN110895789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/40: Analysis of texture
    • G06T7/41: Analysis of texture based on statistical description of texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a face beautifying method and device. The method includes: acquiring a face image to be beautified; performing texture complexity weight calculation on the face image to be beautified with a preset texture complexity analysis algorithm to obtain a weight distribution map, where the weight distribution map contains the texture complexity weight of every pixel of the face image to be beautified; and, based on the texture complexity weight of each pixel in the weight distribution map, performing weighted skin grinding on the face image to be beautified with a preset skin-grinding algorithm to obtain the beautified image. This scheme can improve the face beautification effect.

Description

Face beautifying method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a face image beautifying method and device.
Background
With the development of computer technology, intelligent terminals are used for an ever wider range of applications: listening to music, playing games, chatting online, taking photos, editing images, and so on. Face beautification is a typical image-editing application, and most image-editing apps (application programs) provide a face beautification function.
A current face beautifying method acquires a face image to be beautified, locates the face region in the image by skin-color localization or a face-recognition rectangular frame, and applies beautification processing such as whitening and skin grinding to the located face region to achieve the beautifying effect.
Skin grinding smooths away noise information in the face image to be beautified so that the face appears smooth. However, the face region contains both detail areas, such as the eyes, nose and mouth, and flat areas, such as the cheeks, forehead and hair, and the information carried by these areas differs greatly; information that should be retained may be misidentified as noise during skin grinding. Applying uniform skin grinding to the whole face region therefore also grinds areas that should not be ground, so the beautified image no longer matches the real face and may even be severely distorted, giving a poor beautification effect.
Disclosure of Invention
The embodiment of the invention aims to provide a face image beautifying method and device for improving face beautifying effect. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a face beautifying method, where the method includes:
Acquiring a face image to be beautified;
carrying out texture complexity weight calculation on the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution diagram, wherein the weight distribution diagram comprises texture complexity weights of all pixel points of the face image to be beautified;
and carrying out face peeling weighting treatment on the face image to be beautified by utilizing a preset face peeling algorithm based on the texture complexity weight of each pixel point in the weight distribution diagram to obtain a beautified image.
In a second aspect, an embodiment of the present invention provides a face beautifying device, including:
the acquisition module is used for acquiring the face image to be beautified;
the computing module is used for carrying out texture complexity weight computation on the face image to be beautified by utilizing a preset texture complexity analysis algorithm to obtain a weight distribution diagram, wherein the weight distribution diagram comprises texture complexity weights of all pixel points of the face image to be beautified;
and the beautification module is used for carrying out weighted skin grinding on the face image to be beautified by using a preset skin-grinding algorithm, based on the texture complexity weight of each pixel in the weight distribution map, to obtain a beautified image.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor and a computer-readable storage medium, wherein,
the computer readable storage medium is used for storing a computer program;
the processor is configured to implement any of the method steps described in the first aspect of the embodiment of the present invention when executing the computer program stored on the computer readable storage medium.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements any of the method steps of the first aspect of the embodiments of the present invention.
According to the face beautifying method and device provided by the embodiments of the invention, a face image to be beautified is acquired; texture complexity weights are computed for it with a preset texture complexity analysis algorithm to obtain a weight distribution map; and, based on the texture complexity weight of each pixel in the map, weighted skin grinding is applied with a preset skin-grinding algorithm to obtain the beautified image. Because different areas of the face image have different texture complexity, the pixels of different areas are assigned different weights in the weight distribution map. When skin grinding is performed, weighting by per-pixel texture complexity gives different areas different grinding strengths, so part of the detail information is retained and the face beautification effect is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a face beautifying method according to an embodiment of the invention;
fig. 2 is an acquired face image to be beautified according to an embodiment of the present invention;
fig. 3 is a weight distribution diagram obtained based on the maximum texture complexity of the whole face image to be beautified according to the embodiment of the invention;
FIG. 4 is a weight distribution diagram obtained based on the maximum texture complexity of a face region according to an embodiment of the present invention;
fig. 5 is a flow chart of a face beautifying method according to another embodiment of the invention;
FIG. 6 is a schematic diagram of the parameter and image gain linkage according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a face beautifying device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to improve the face beautifying effect, embodiments of the invention provide a face beautifying method, a device, an electronic device and a computer-readable storage medium.
Next, a face beautifying method provided by the embodiment of the invention is first described.
The execution body of the face beautifying method provided by the embodiment of the invention may be an electronic device that runs the algorithm, such as a computer, a mobile phone, an image editor, a smart camera or another terminal with an image-editing function. The method may be implemented by at least one of software, a hardware circuit, or a logic circuit inside the execution body.
As shown in fig. 1, the face beautifying method provided by the embodiment of the invention may include the following steps:
s101, acquiring a face image to be beautified.
The face image to be beautified is an image, captured of a person, that contains facial features and whose face needs beautification. In the embodiment of the invention, face beautification means grinding the flat areas of the face smooth while preserving the detail areas of the face.
S102, carrying out texture complexity weight calculation on the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution diagram.
The weight distribution map contains the texture complexity weight of every pixel of the face image to be beautified. The preset texture complexity analysis algorithm computes texture complexity information for each pixel; it may be a BoxFilter algorithm, an image gray-level histogram method, a directional filter algorithm, and so on. The BoxFilter algorithm is a neighborhood operation: each output element is the sum of the pixels (or of the squared pixels) in a window around the pixel, so each element integrates the pixel values of its whole neighborhood, which effectively aggregates pixel information and avoids abrupt jumps in pixel values.
In the face image to be beautified, the texture complexity among the pixel points is different, and through texture complexity analysis, a flat area capable of grinding the skin and a detail area needing to be reserved can be distinguished.
After the texture complexity weights of the face image to be beautified are computed, the detail areas, whose texture complexity is high, receive large weights in the obtained weight distribution map, while the flat areas, whose texture complexity is low, receive small weights.
Optionally, S102 may specifically include the following steps:
First, a pixel mean map of the face image to be beautified is calculated from the image using a first BoxFilter algorithm.
The BoxFilter algorithm is an operation that performs a fast additive summation of pixel values within each window given a sliding window size. After the face image F to be beautified is obtained, the first box filter may be used to perform the calculation shown in the formula (1) to obtain a pixel mean value map corresponding to the face image to be beautified.
meanF=BoxFilter(F) (1)
where meanF is the pixel mean map corresponding to the face image F to be beautified.
Second, a pixel mean square error map corresponding to the face image to be beautified is calculated from the image and the pixel mean map.
After the pixel mean map is obtained, the pixel mean square error map corresponding to the face image to be beautified can be calculated from the image and the pixel mean map using formula (2).
valF = (F - meanF)^2 (2)
Wherein valF is a pixel mean square error diagram corresponding to the face image to be beautified.
Optionally, the step of calculating the pixel mean square error map corresponding to the face image to be beautified according to the face image to be beautified and the pixel mean value map may specifically be:
and carrying out mean filtering on the mean square error of the face image to be beautified and the pixel mean graph by using a first Box Filter algorithm according to the face image to be beautified and the pixel mean graph, and obtaining the pixel mean square error graph corresponding to the face image to be beautified.
If the mean square error of the face image and the pixel mean map is computed directly, the error is large in edge areas but small at points adjacent to the edges, so the values of the pixel mean square error map jump sharply near edges and the map is heavily burred. To reduce the burrs, the squared difference can be mean-filtered with the first BoxFilter algorithm, giving the pixel mean square error map via formula (3).
valF = BoxFilter[(F - meanF)^2] (3)
In the pixel mean square error diagram, the larger the pixel mean square error is, the larger the texture complexity of the pixel point is.
Third, the maximum pixel mean square error is searched for in the region of the pixel mean square error map that corresponds to the face region of the image.
When the texture complexity weight of each pixel is calculated, normalization could use the maximum texture complexity of the whole image as the reference. For the face image shown in fig. 2, normalizing by the whole-image maximum gives the weight distribution map shown in fig. 3, in which detail areas of the face such as the eyes, nose and mouth are not prominent enough. To highlight the detail areas in the weight distribution map, normalization is instead based on the maximum texture complexity within the face region; the maximum pixel mean square error must therefore be found in the region of the map corresponding to the face region, as in formula (4).
FaceMax=max{valF(x,y)|(x,y)∈FaceRoi} (4)
where FaceMax is the maximum pixel mean square error, valF(x,y) is the pixel mean square error of pixel (x,y), FaceRoi = {fx, fy, fw, fh} is the face region, and fx, fy, fw and fh are the coordinates, width and height of the face region.
Fourth, the pixel mean square error of each pixel in the pixel mean square error map is divided by the maximum pixel mean square error to obtain the texture complexity weight of each pixel.
The normalization divides the pixel mean square error of each pixel by the maximum value, as in formula (5); the resulting texture complexity weight is a number between 0 and 1.
texInf(x,y) = valF(x,y) / FaceMax (5)
where texInf(x,y) is the texture complexity weight of pixel (x,y).
Fifth, the weight distribution map is generated from the texture complexity weights of the pixels.
After the texture complexity weight of each pixel is obtained by the above steps, the weight distribution map can be generated, as shown in fig. 4: detail areas of the face such as the eyes, nose and mouth are clearly distinguished from other areas, and the brighter an area, the larger its texture complexity weight. Through the texture complexity weight calculation, a large part of the background is also assigned high texture complexity weights. Later, during skin grinding, areas with low texture complexity weight can be ground heavily, while areas with high texture complexity weight can be ground lightly or not at all.
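The five steps above can be sketched in NumPy as follows. The helper names, the window radius, the reflect-padded borders, the small guard against a flat face region and the final clipping of weights to [0, 1] are illustrative assumptions, not details fixed by the patent:

```python
import numpy as np

def box_mean(img, radius=1):
    """BoxFilter as a windowed mean over (2*radius+1)^2 neighbourhoods,
    computed with a summed-area table; borders are reflect-padded."""
    k = 2 * radius + 1
    p = np.pad(img.astype(np.float64), radius, mode="reflect")
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

def texture_weight_map(F, face_roi, radius=1):
    """Formulas (1)-(5): smoothed local variance, normalised by its
    maximum inside the face region face_roi = (fx, fy, fw, fh)."""
    fx, fy, fw, fh = face_roi
    meanF = box_mean(F, radius)                    # (1) pixel mean map
    valF = box_mean((F - meanF) ** 2, radius)      # (3) mean square error map
    face_max = valF[fy:fy + fh, fx:fx + fw].max()  # (4) max over the face ROI
    face_max = max(face_max, 1e-12)                # guard: ROI may be flat
    return np.clip(valF / face_max, 0.0, 1.0)      # (5) weights in [0, 1]
```

On a flat image with a single textured pixel, the textured pixel receives the largest weight while flat skin tends toward zero, matching the behaviour of fig. 4.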
Optionally, before the step of calculating the pixel mean value map corresponding to the face image to be beautified by using the first box filter algorithm according to the face image to be beautified, the face beautification method provided by the embodiment of the invention may further include the following steps:
acquiring the image gain of a face image to be beautified;
and, according to the image gain, setting the BoxFilter parameter in the first BoxFilter algorithm according to a preset inverse-proportional linkage relation between the image gain and the BoxFilter parameter.
Because of the analog gain, digital gain, platform gain and so on of the device hardware, image gain varies, and the larger the image gain, the more noise the image contains. To further improve the beautifying effect, the BoxFilter parameter in the first BoxFilter algorithm can be set according to a preset inverse-proportional linkage relation between the image gain and the BoxFilter parameter; since the first BoxFilter algorithm is used when computing the weight distribution map, its parameter setting principle must be determined. If the image gain is larger, the BoxFilter parameter in the first BoxFilter algorithm is set smaller; if the image gain is smaller, the parameter is set larger. Different BoxFilter parameter settings yield different texture complexity weights: the weights are larger when the image gain is larger, and smaller otherwise.
Optionally, before S102, the face beautifying method provided by the embodiment of the present invention may further perform the following steps:
judging whether the brightness of the face image to be beautified reaches a preset threshold value or not;
if not, the pixel value of the face image to be beautified is adjusted by using a preset pixel value correction algorithm, and the face image to be beautified is updated until the brightness of the face image to be beautified reaches a preset threshold value.
Because face images to be beautified come from different scenes, their brightness may be low, and if the brightness is very low, detail areas and flat areas are hard to distinguish during beautification. Therefore, when the brightness of the face image to be beautified is below the preset threshold, a preset pixel value correction algorithm can be used to adjust its pixel values until the brightness reaches the preset threshold. The preset pixel value correction algorithm may be a Gamma correction algorithm, a linear adjustment algorithm, or the like.
Optionally, the preset pixel value correction algorithm may include a Gamma correction algorithm.
The Gamma correction algorithm is a nonlinear pixel adjustment algorithm whose curve represents the nonlinear relation between pixel brightness and input voltage. For a dark image the Gamma curve is convex; for a bright image it is concave. Image brightness is adjusted by tuning the Gamma variable, which may be set adaptively or to a fixed value.
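A minimal sketch of Gamma correction for one 8-bit pixel. The patent does not give the exact transfer function, so the common power-law form is assumed here, with the convention that gamma > 1 brightens a dark image and gamma < 1 darkens it:

```python
def gamma_correct(pixel, gamma):
    """Gamma curve for an 8-bit pixel value.

    Assumed form (not spelled out in the patent):
        v_out = 255 * (v_in / 255) ** (1 / gamma)
    so gamma > 1 lifts dark values and gamma < 1 suppresses them.
    """
    return 255.0 * (pixel / 255.0) ** (1.0 / gamma)
```

Applying this to every pixel, and repeating until the mean brightness reaches the preset threshold, matches the adjust-and-update loop described above.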
Optionally, before the step of adjusting the pixel value of the face image to be beautified and updating the face image to be beautified by using a preset pixel value correction algorithm, the face beautifying method provided by the embodiment of the invention may further execute the following steps:
acquiring the image gain of a face image to be beautified;
according to the image gain, setting the Gamma variable in the Gamma correction algorithm correspondingly according to a preset proportional linkage relation between the image gain and the Gamma variable.
The larger the image gain is, the darker the brightness of the image is, and the brightness of the image can be better improved by setting the Gamma variable to be a little larger.
S103, carrying out face peeling weighting treatment on the face image to be beautified by utilizing a preset face peeling algorithm based on the texture complexity weight of each pixel point in the weight distribution diagram, so as to obtain the beautified image.
The preset face peeling algorithm is an image filtering algorithm adopted in face peeling processing, such as a bilateral filtering algorithm, a Non-Local-Means (Non-Local mean) filtering algorithm, a BM3D (Block-Matching and 3D, 3-dimensional Block Matching) filtering algorithm, and the like. Because of factors such as high algorithm complexity of Non-Local-Means filtering algorithm and BM3D filtering algorithm, and limited computing resources, a bilateral filtering algorithm is generally adopted in practical application.
Optionally, the preset face skin-polishing algorithm may include a bilateral filtering algorithm, and S103 may specifically include the following steps:
the first step, according to the pixel value of each pixel point in the face image to be beautified, calculating the pixel value of each pixel point after filtering by utilizing a bilateral filtering algorithm.
The bilateral filter consists of two parts, one related to the spatial distance between pixels and the other related to the difference between pixel values; the bilateral filter formula is:
BF(x,y) = Σ_(i,j) F(i,j) * w(x,y,i,j) (6)
wherein BF (x, y) is the pixel value of the filtered pixel point (x, y), w (x, y, i, j) is the bilateral filter weight, and as shown in formula (7), the bilateral filter weight is equal to the product of the spatial proximity factor and the luminance similarity factor.
w(x,y,i,j)=d(x,y,i,j)*r(x,y,i,j) (7)
Wherein d (x, y, i, j) is a spatial proximity factor, and r (x, y, i, j) is a luminance similarity factor. The expression of the spatial proximity factor is shown in formula (8):
d(x,y,i,j) = exp(-((x-i)^2 + (y-j)^2) / (2·δ_d^2)) (8)
where δ_d is the spatial proximity parameter.
The expression of the luminance similarity factor is shown in formula (9):
r(x,y,i,j) = exp(-|F(x,y) - F(i,j)|^2 / (2·δ_r^2)) (9)
where δ_r is the luminance similarity factor parameter.
The global convolution kernel and the pixel difference kernel are then calculated separately: a neighborhood window of radius n is defined, and formulas (8) and (9) give the spatial proximity factor parameters (formula (10)) and the luminance similarity factor parameters (formula (11)).
D = [d_ef], e = x - i, f = y - j, e, f ∈ [-n, n] (10)
where d_ef is the spatial proximity factor parameter for x - i = e, y - j = f.
R = [r_0, r_1, …, r_(m-1)]_(1×m) (11)
where r_g is the luminance similarity factor parameter of the g-th pixel and m is the total number of pixels.
Thus, according to the formula (6), the formula (10) and the formula (11), the pixel value of each pixel after filtering can be obtained as shown in the formula (12).
BF(x,y) = Σ_(i,j) F(i,j) * D(i-x, j-y) * R(|F(i,j) - F(x,y)|) (12)
where i ∈ [x-n, x+n], j ∈ [y-n, y+n].
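A minimal, direct implementation of the per-pixel bilateral filter described by formulas (6) to (9) and (12). The normalisation by the summed weights and the clipping of the window at image borders are standard bilateral-filter practice added here as assumptions; without the normalisation, a literal reading of formula (12) would not preserve brightness in flat regions:

```python
import math

def bilateral_pixel(F, x, y, n, delta_d, delta_r):
    """Bilateral filtering of pixel (x, y) over a (2n+1)x(2n+1) window.

    F is a list of rows of gray values; the weight of each neighbour is
    w = d * r (formula (7)), the product of the Gaussian spatial
    proximity factor d and the luminance similarity factor r.
    """
    num = den = 0.0
    for i in range(max(0, x - n), min(len(F), x + n + 1)):
        for j in range(max(0, y - n), min(len(F[0]), y + n + 1)):
            d = math.exp(-((x - i) ** 2 + (y - j) ** 2) / (2.0 * delta_d ** 2))
            r = math.exp(-((F[x][y] - F[i][j]) ** 2) / (2.0 * delta_r ** 2))
            w = d * r                      # formula (7)
            num += F[i][j] * w
            den += w
    return num / den                       # normalised weighted average
```

On a uniform region the output equals the input, and across a strong edge the luminance similarity factor suppresses the far side, which is exactly the edge-preserving behaviour the text relies on.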
Second, based on the texture complexity weight of each pixel in the weight distribution map, the pixel values of the face image to be beautified and the filtered pixel values are combined by a preset weighting formula to obtain the beautified image.
The weight distribution map contains the texture complexity weight of each pixel. To preserve details around the eyes, nose, mouth, bangs and so on, pixels with large texture complexity weights are ground lightly or not at all, while pixels with small weights (such as those in the cheek and forehead areas) are ground heavily. The face image to be beautified can therefore be weighted by the formula shown in (13), so that detail areas are preserved and flat areas are ground.
MF(x,y)=BF(x,y)*(1-texInf(x,y))+F(x,y)*texInf(x,y) (13)
where MF(x,y) is the pixel value of pixel (x,y) in the beautified image, BF(x,y) is the filtered pixel value of pixel (x,y), texInf(x,y) is the texture complexity weight of pixel (x,y) in the weight distribution map, and F(x,y) is the pixel value of pixel (x,y) in the face image to be beautified.
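Formula (13) is a one-line blend per pixel; a hypothetical helper makes the behaviour explicit:

```python
def blend_pixel(bf, f, tex):
    """Formula (13): MF = BF*(1 - texInf) + F*texInf.

    A pixel in a flat area (tex near 0) takes the smoothed value bf,
    while a pixel in a detail area (tex near 1) keeps its original
    value f.
    """
    return bf * (1.0 - tex) + f * tex
```

This is why the weight distribution map directly controls grinding strength: the blend degenerates to full grinding at tex = 0 and to no grinding at tex = 1.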
Optionally, before the step of calculating the pixel value of each pixel point after filtering by using a bilateral filtering algorithm according to the pixel value of each pixel point in the face image to be beautified, the face beautification method provided by the embodiment of the invention may further perform the following steps:
acquiring the image gain of a face image to be beautified;
according to the image gain, setting bilateral filtering parameters in the bilateral filtering algorithm correspondingly according to a preset proportional linkage relation between the image gain and the bilateral filtering parameters.
Because of the analog gain, digital gain, platform gain and so on of the device hardware, image gain varies, and the larger the image gain, the more noise the image contains. To further improve the beautifying effect, the bilateral filtering parameters can be set from the image gain according to a preset proportional linkage relation between the two: if the image gain is larger, the bilateral filtering parameters are set larger; if the gain is smaller, they are set smaller.
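The gain linkages used throughout the method can be collected in one hypothetical helper. The patent fixes only the directions of the linkages (inverse-proportional for the BoxFilter parameter, proportional for the bilateral filtering parameter and the sharpening weight); the linear forms, the constants and gain_max below are invented for illustration:

```python
def linked_params(gain, gain_max=64.0):
    """Map an image gain to (box_param, bilateral_param, sharpen_weight).

    Illustrative assumption: clamp the gain, normalise it to (0, 1],
    then apply a decreasing linear law to the BoxFilter parameter and
    increasing linear laws to the other two.
    """
    g = min(max(gain, 1.0), gain_max) / gain_max   # normalised gain
    box_param = max(1, round(5 * (1.0 - g)))       # smaller at high gain
    bilateral_param = 5.0 + 45.0 * g               # larger at high gain
    sharpen_weight = 0.1 + 0.4 * g                 # larger at high gain
    return box_param, bilateral_param, sharpen_weight
```

Any monotone mapping with the same directions would satisfy the linkage principles described in the text.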
Optionally, after S103, the face beautifying method provided by the embodiment of the present invention may further perform the following steps:
acquiring the image gain of a face image to be beautified;
according to the image gain, correspondingly setting sharpening weights according to a preset proportional linkage relation of the image gain and the sharpening weights to obtain a sharpening formula;
carrying out image sharpening on the beauty image by utilizing a sharpening formula to obtain a sharpened beauty image, wherein the sharpening formula is as follows:
dst(x,y)=(F(x,y)-Gaussian(F(x,y),H)*w)/(1-w) (14)
where dst(x,y) is the pixel value of pixel (x,y) in the sharpened beautified image, F(x,y) is the pixel value of pixel (x,y) in the image to be sharpened, Gaussian(F(x,y), H) is the convolution of the image's pixel values with the Gaussian convolution kernel H, each coefficient of H is replaced by its mean value, and w is the sharpening weight.
Because the beautified image may still contain fine interfering details and noise, a sharpening method such as USM (Unsharp Mask) sharpening is used in the embodiment of the invention to remove them and improve the realism of the output image; replacing the coefficients of the Gaussian convolution kernel H with their mean value reduces the computational complexity. In addition, because of the analog gain, digital gain, platform gain and so on of the device hardware, image gain varies, and the larger the image gain, the more noise the image contains. To further improve the beautifying effect, the sharpening weight in the sharpening formula can be set from the image gain according to a preset proportional linkage relation: a larger image gain gets a larger sharpening weight for a stronger sharpening effect, while a smaller gain gets a smaller weight for a lighter effect.
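At a single pixel, formula (14) with the mean-valued kernel H reduces to an unsharp-mask step against the local mean. The helper below is a sketch under that reading; the window over which local_mean is taken is left to the caller, since the patent does not fix a kernel size:

```python
def usm_sharpen_pixel(f, local_mean, w):
    """Formula (14) at one pixel, with Gaussian(F, H) replaced by a
    plain local mean (every coefficient of H set to the same average):

        dst = (f - local_mean * w) / (1 - w),   0 <= w < 1
    """
    return (f - local_mean * w) / (1.0 - w)
```

A pixel equal to its neighbourhood mean is left unchanged, while a pixel above the mean is pushed further up, and a larger w yields the stronger sharpening described for high-gain images.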
By applying this embodiment, a face image to be beautified is acquired, texture complexity weights are calculated for the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution map, and, based on the texture complexity weight of each pixel in the weight distribution map, weighted face skin grinding is performed on the face image to be beautified by using a preset face skin-grinding algorithm to obtain a beauty image. Since different areas of the face image to be beautified have different texture complexity, the texture complexity weights assigned to the pixels of different areas in the weight distribution map differ; when face skin grinding is performed with the preset face skin-grinding algorithm, the weighting based on the texture complexity weight of each pixel gives different areas of the face image different skin-grinding strengths, so that part of the detail information can be retained and the face beautifying effect is improved.
The distribution of texture is taken as the evaluation criterion: where the texture of a region is rich, its texture complexity weight is larger; where the texture is sparse, the weight is smaller. The detail-protection weight controls the strength of skin grinding, so that the skin of the face is ground while detail parts are not. Under different gains, different parameters and weights are used, so that the face beautifying effect can be controlled more finely and further improved: more details are retained under low gain, and less face noise remains under high gain.
Based on the embodiment shown in fig. 1, an embodiment of the present invention further provides a face beautifying method. As shown in fig. 5, the face beautifying method may include the following steps:
S501, acquiring a face image to be beautified.
S501 is the same as S101 in the embodiment shown in fig. 1, and will not be described here again.
S502, carrying out dark area detail improvement on the face image to be beautified.
Based on the embodiment shown in fig. 1: because the brightness may differ across areas of the face image to be beautified, the overall brightness of the image may be low, and when the brightness is low it is difficult to distinguish detail areas from flat areas during face beautifying. Therefore, after the face image to be beautified is obtained, dark-area detail improvement may be performed first. As in the embodiment of fig. 1, a preset pixel value correction algorithm may be used to adjust the pixel values of the face image to be beautified when its brightness is below a preset threshold, until the brightness reaches the preset threshold. The detailed steps of dark-area detail improvement are shown in the embodiment of fig. 1 and are not repeated here.
S503, distinguishing the edge, the detail and the flat area in the face image to be beautified.
Distinguishing the edge, detail and flat areas in the face image to be beautified is in fact the process of obtaining the weight distribution map in S102 of the embodiment shown in fig. 1. Through texture complexity analysis, the edge and detail areas of the face image to be beautified have larger texture complexity, so the texture complexity weights of the corresponding areas in the weight distribution map are larger; the flat areas have smaller texture complexity, so the texture complexity weights of the corresponding areas in the weight distribution map are smaller. The specific steps for calculating the weight distribution map are shown in the embodiment of fig. 1 and are not repeated here.
S504, carrying out face peeling on the face image to be beautified.
The process of face skin grinding is shown in S103 of the embodiment in fig. 1 and is not repeated here.
S505, fusing the edges and details of the face image to be beautified with the beauty image.
After face skin grinding, the obtained beauty image is often a single-channel image whose display effect is not ideal, and some jumping and blurring inevitably occur in the edge transition areas. Therefore, a guided filtering method is used to fuse the edges and details with the beauty image, so that the edge transitions are natural and no blurring is produced.
Optionally, the step of fusing may specifically include:
the first step, respectively calculating, by using a second BoxFilter algorithm, a first pixel mean map corresponding to the face image to be beautified, a second pixel mean map corresponding to the beauty image and a first mean-square map corresponding to the face image to be beautified, and calculating a second mean-square map according to the dot-multiplication result of the face image to be beautified and the beauty image.
Optionally, the step of respectively calculating, by using the second BoxFilter algorithm, the first pixel mean map corresponding to the face image to be beautified, the second pixel mean map corresponding to the beauty image and the first mean-square map corresponding to the face image to be beautified, and calculating the second mean-square map according to the dot-multiplication result of the face image to be beautified and the beauty image, may specifically include:
using the second BoxFilter algorithm, calculating the first pixel mean map by equation (15), the second pixel mean map by equation (16), the first mean-square map by equation (17) and the second mean-square map by equation (18), respectively.
meanI=BoxFilter(I) (15)
meanP=BoxFilter(P) (16)
corrI=BoxFilter(I.*I) (17)
corrIP=BoxFilter(I.*P) (18)
Wherein I is the face image to be beautified, P is the beauty image, meanI is the first pixel mean map, meanP is the second pixel mean map, corrI is the first mean-square map, and corrIP is the second mean-square map.
And secondly, calculating a pixel variance map corresponding to the face image to be beautified according to the first mean-square map and the first pixel mean map.
Optionally, the step of calculating the pixel variance map according to the first mean-square map and the first pixel mean map may specifically include:
calculating, by using formula (19), the pixel variance map according to the first mean-square map and the first pixel mean map.
varI=corrI-meanI.*meanI (19)
Wherein varI is the pixel variance map.
And thirdly, calculating a covariance map of the face image to be beautified and the beauty image according to the second mean-square map, the first pixel mean map and the second pixel mean map.
Optionally, the step of calculating the covariance map of the face image to be beautified and the beauty image according to the second mean-square map, the first pixel mean map and the second pixel mean map may specifically include:
calculating, by using formula (20), the covariance map of the face image to be beautified and the beauty image according to the second mean-square map, the first pixel mean map and the second pixel mean map.
covIP=corrIP-meanI.*meanP (20)
Wherein covIP is the covariance map.
And fourthly, calculating a first linear parameter matrix according to the covariance map and the pixel variance map.
Optionally, the step of calculating the first linear parameter matrix according to the covariance map and the pixel variance map may specifically include:
According to the covariance map and the pixel variance map, a first linear parameter matrix is calculated by using a first linear parameter calculation formula, wherein the first linear parameter calculation formula is as follows:
a=covIP./(varI+ε) (21)
where a is the first linear parameter matrix and ε is a preset matrix.
And fifthly, calculating a second linear parameter matrix according to the second pixel mean value graph, the first linear parameter matrix and the first pixel mean value graph.
Optionally, the step of calculating the second linear parameter matrix according to the second pixel mean map, the first linear parameter matrix and the first pixel mean map may specifically include:
according to the second pixel mean value diagram, the first linear parameter matrix and the first pixel mean value diagram, a second linear parameter matrix is calculated by using a second linear parameter calculation formula, wherein the second linear parameter calculation formula is as follows:
b=meanP-a.*meanI (22)
b is a second linear parameter matrix.
And sixthly, respectively filtering the first linear parameter matrix and the second linear parameter matrix by using the second BoxFilter algorithm to obtain a first parameter average matrix corresponding to the first linear parameter matrix and a second parameter average matrix corresponding to the second linear parameter matrix.
Optionally, the step of filtering the first linear parameter matrix and the second linear parameter matrix by using the second BoxFilter algorithm to obtain the first parameter average matrix corresponding to the first linear parameter matrix and the second parameter average matrix corresponding to the second linear parameter matrix may specifically include:
using the second BoxFilter algorithm, calculating the first parameter average matrix by equation (23) and the second parameter average matrix by equation (24), respectively.
meanA=BoxFilter(a) (23)
meanB=BoxFilter(b) (24)
Wherein meanA is the first parameter average matrix and meanB is the second parameter average matrix.
Seventh, obtaining an output image according to the first parameter average matrix, the beauty image and the second parameter average matrix.
Optionally, the step of obtaining the output image according to the first parameter average matrix, the beauty image and the second parameter average matrix may specifically include:
according to the first parameter mean matrix, the beauty image and the second parameter mean matrix, an output image calculation formula is utilized to obtain an output image, wherein the output image calculation formula is as follows:
q=meanA.*I+meanB (25)
q is the output image.
According to the above method, for the components P_R, P_G and P_B of the three channels of the beauty image P respectively, the corresponding three-channel output images q_R, q_G and q_B are obtained, and the final output image is obtained by combining the output images of the three channels.
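Under the same caveat — an illustrative sketch rather than the claimed implementation, with hypothetical names and a whole-image box filter — steps one to seven (formulas (15) to (25)) chain together as a standard guided filter, with the face image to be beautified I as the guide:

```python
import numpy as np

def box_filter(X, k):
    """Windowed mean (the BoxFilter of the text), edge-replicated borders."""
    pad = k // 2
    padded = np.pad(X.astype(np.float64), pad, mode="edge")
    out = np.zeros(X.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + X.shape[0], dx:dx + X.shape[1]]
    return out / (k * k)

def guided_fuse(I, P, k=9, eps=1e-3):
    """Fuse guide I (face image to be beautified) with beauty image P."""
    meanI = box_filter(I, k)            # (15)
    meanP = box_filter(P, k)            # (16)
    corrI = box_filter(I * I, k)        # (17)
    corrIP = box_filter(I * P, k)       # (18)
    varI = corrI - meanI * meanI        # (19)
    covIP = corrIP - meanI * meanP      # (20)
    a = covIP / (varI + eps)            # (21)
    b = meanP - a * meanI               # (22)
    meanA = box_filter(a, k)            # (23)
    meanB = box_filter(b, k)            # (24)
    return meanA * I + meanB            # (25)
```

For an RGB beauty image the function is called once per channel (P_R, P_G, P_B) and the three outputs q_R, q_G, q_B are stacked back into the final image; the window size k corresponds to the second BoxFilter parameter discussed below.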
Optionally, before the second BoxFilter algorithm is used to respectively calculate the first pixel mean map corresponding to the face image to be beautified, the second pixel mean map corresponding to the beauty image and the first mean-square map corresponding to the face image to be beautified, and the second mean-square map is calculated according to the dot-multiplication result of the face image to be beautified and the beauty image, the face beautifying method provided by the embodiment of the invention may further perform the following steps:
Acquiring the image gain of a face image to be beautified;
and setting the BoxFilter parameter in the second BoxFilter algorithm according to the image gain, following a preset BoxFilter parameter setting principle corresponding to the output image and a preset proportional linkage relation between the image gain and the BoxFilter parameter.
Owing to the analog gain, digital gain, platform gain and the like of the device hardware, the image gain varies, and the larger the image gain, the greater the noise content of the image. To further improve the beautifying effect, the BoxFilter parameter in the second BoxFilter algorithm can be set according to the preset proportional linkage relation between the image gain and the BoxFilter parameter; since the second BoxFilter algorithm is used in the calculation of the output image, this determines the setting principle of its BoxFilter parameter. If the image gain is larger, the BoxFilter parameter in the second BoxFilter algorithm is set larger; if the image gain is smaller, it is set smaller. The BoxFilter parameter is in fact the size of the sliding window, and the BoxFilter parameters in the first BoxFilter algorithm and the second BoxFilter algorithm may be set to be the same or different.
As shown in fig. 5, different image gains affect the parameters of each link in the whole face beautifying process, and the parameters of each link can be adjusted based on the image gain. Combining the embodiment shown in fig. 1 and the embodiment shown in fig. 5, different parameters are used under different image gains to control the face beautifying effect, so that more details are retained under low gain and less face noise remains under high gain. As shown in fig. 6, the parameters set according to the image gain mainly include: the texture complexity parameter (the BoxFilter parameter in the first BoxFilter algorithm), the texture complexity weight (texInf(x, y)), the bilateral filtering parameter, the guided filtering parameter (the BoxFilter parameter in the second BoxFilter algorithm) and the sharpening weight.
S506, sharpening the fused image to obtain a sharpened beauty image.
After the edges and details of the face image to be beautified are fused with the beauty image to obtain the fused image, sharpening may be applied because the image may still contain some fine interference details and noise, making the sharpened beauty image more authentic and reliable. The specific sharpening process is similar to that described in the embodiment shown in fig. 1 and is not repeated here.
By applying this embodiment, a face image to be beautified is acquired, texture complexity weights are calculated for the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution map, and, based on the texture complexity weight of each pixel in the weight distribution map, weighted face skin grinding is performed on the face image to be beautified by using a preset face skin-grinding algorithm to obtain a beauty image. Since different areas of the face image to be beautified have different texture complexity, the texture complexity weights assigned to the pixels of different areas in the weight distribution map differ; when face skin grinding is performed with the preset face skin-grinding algorithm, the weighting based on the texture complexity weight of each pixel gives different areas of the face image different skin-grinding strengths, so that part of the detail information can be retained and the face beautifying effect is improved.
The distribution of texture is taken as the evaluation criterion: where the texture of a region is rich, its texture complexity weight is larger; where the texture is sparse, the weight is smaller. The detail-protection weight controls the strength of skin grinding, so that the skin of the face is ground while detail parts are not. The fusion of the edges, the details and the beauty image is performed by guided filtering, so that the edge transitions are natural and no blurring is produced. Under different gains, different parameters and weights are used, so that the face beautifying effect can be controlled more finely and further improved: more details are retained under low gain, and less face noise remains under high gain.
Corresponding to the above method embodiment, the embodiment of the present invention provides a face beautifying device, as shown in fig. 7, which may include:
an acquisition module 710, configured to acquire a face image to be beautified;
the calculating module 720 is configured to perform texture complexity weight calculation on the face image to be beautified by using a preset texture complexity analysis algorithm, so as to obtain a weight distribution diagram, where the weight distribution diagram includes texture complexity weights of all pixel points of the face image to be beautified;
and the beauty module 730, configured to perform weighted face skin-grinding processing on the face image to be beautified by using a preset face skin-grinding algorithm based on the texture complexity weight of each pixel in the weight distribution map, to obtain a beauty image.
Optionally, the apparatus may further include:
the judging module is used for judging whether the brightness of the face image to be beautified reaches a preset threshold value;
and the updating module is used for adjusting the pixel value of the face image to be beautified by using a preset pixel value correction algorithm if the judging result of the judging module is negative, and updating the face image to be beautified until the brightness of the face image to be beautified reaches the preset threshold value.
Optionally, the preset pixel value correction algorithm may include: gamma correction algorithm;
the acquiring module 710 may be further configured to acquire an image gain of the face image to be beautified;
the Gamma variable in the Gamma correction algorithm is: set according to the image gain, following a preset proportional linkage relation between the image gain and the Gamma variable.
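The patent does not give the exact linkage between image gain and the Gamma variable, so the following sketch assumes a simple linear link; gamma0 and slope are hypothetical parameters introduced only for illustration:

```python
import numpy as np

def gamma_correct(F, gain, gamma0=1.0, slope=0.05):
    """Gamma correction whose Gamma variable grows with the image gain
    (assumed linear proportional linkage; gamma0 and slope are made up)."""
    gamma = gamma0 + slope * gain
    norm = np.clip(F / 255.0, 0.0, 1.0)
    return (norm ** (1.0 / gamma)) * 255.0
```

With gamma > 1 the curve lifts dark pixels more than bright ones, which matches the dark-area detail improvement of the pixel value correction step; at gain = 0 the mapping is the identity.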
Optionally, the computing module 720 may specifically be configured to:
calculating, according to the face image to be beautified, a pixel mean map corresponding to the face image to be beautified by using a first BoxFilter algorithm;
calculating a pixel mean-square-error map corresponding to the face image to be beautified according to the face image to be beautified and the pixel mean map;
searching for the maximum pixel mean-square error in the area of the pixel mean-square-error map corresponding to the face area of the face image to be beautified;
dividing the pixel mean-square error of each pixel in the pixel mean-square-error map by the maximum pixel mean-square error to obtain the texture complexity weight of each pixel;
and generating the weight distribution map according to the texture complexity weight of each pixel.
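A sketch of those steps, assuming for simplicity that the face area is the whole image (in the claim the maximum is searched only inside the face area); the function names are illustrative:

```python
import numpy as np

def box_filter(X, k):
    """Windowed mean with edge-replicated borders (the first BoxFilter)."""
    pad = k // 2
    padded = np.pad(X.astype(np.float64), pad, mode="edge")
    out = np.zeros(X.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + X.shape[0], dx:dx + X.shape[1]]
    return out / (k * k)

def texture_weight_map(F, k=7):
    meanF = box_filter(F, k)                # pixel mean map
    mse = box_filter((F - meanF) ** 2, k)   # pixel mean-square-error map
    return mse / (mse.max() + 1e-12)        # normalize by the maximum
```

Edges and details produce large local deviations and weights near 1; flat skin produces weights near 0, reproducing the distinction the weight distribution map draws between detail and flat areas.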
Optionally, the computing module 720 may specifically be configured to:
performing, according to the face image to be beautified and the pixel mean map, mean filtering on the mean-square error of the face image to be beautified and the pixel mean map by using the first BoxFilter algorithm, to obtain the pixel mean-square-error map corresponding to the face image to be beautified.
Optionally, the acquiring module 710 may be further configured to acquire an image gain of the face image to be beautified;
the BoxFilter parameter in the first BoxFilter algorithm is: set according to the image gain, following a preset BoxFilter parameter setting principle corresponding to the weight distribution map and a preset inverse proportional linkage relation between the image gain and the BoxFilter parameter.
Optionally, the preset face skin-grinding algorithm may include: a bilateral filtering algorithm;
the beauty module 730 may specifically be configured to:
calculating the pixel value of each pixel point after filtering by utilizing the bilateral filtering algorithm according to the pixel value of each pixel point in the face image to be beautified;
and performing weighted calculation on the pixel values of the pixels in the face image to be beautified and the filtered pixel values through a preset weighting formula, based on the texture complexity weight of each pixel in the weight distribution map, to obtain the beauty image.
Optionally, the preset weighting formula may be:
MF(x,y)=BF(x,y)*(1-texInf(x,y))+F(x,y)*texInf(x,y)
wherein MF(x, y) is the pixel value of pixel (x, y) in the beauty image, BF(x, y) is the filtered pixel value of pixel (x, y), texInf(x, y) is the texture complexity weight of pixel (x, y) in the weight distribution map, and F(x, y) is the pixel value of pixel (x, y) in the face image to be beautified.
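The weighting itself is a per-pixel blend of the original and the bilateral-filtered image; a minimal sketch (array names assumed):

```python
import numpy as np

def weighted_smooth(F, BF, texInf):
    """MF = BF * (1 - texInf) + F * texInf: flat areas (small texInf)
    take the bilateral-filtered value, detail areas keep the original."""
    return BF * (1.0 - texInf) + F * texInf
```

At texInf = 1 the output is the untouched original F (full detail protection); at texInf = 0 it is the filtered BF (full skin grinding); values in between interpolate linearly.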
Optionally, the acquiring module 710 may be further configured to acquire an image gain of the face image to be beautified;
the bilateral filtering parameter in the bilateral filtering algorithm is: set according to the image gain, following a preset proportional linkage relation between the image gain and the bilateral filtering parameter.
Optionally, the computing module 720 may be further configured to:
respectively calculating, by using a second BoxFilter algorithm, a first pixel mean map corresponding to the face image to be beautified, a second pixel mean map corresponding to the beauty image and a first mean-square map corresponding to the face image to be beautified, and calculating a second mean-square map according to the dot-multiplication result of the face image to be beautified and the beauty image;
calculating a pixel variance map corresponding to the face image to be beautified according to the first mean-square map and the first pixel mean map;
calculating a covariance map of the face image to be beautified and the beauty image according to the second mean-square map, the first pixel mean map and the second pixel mean map;
calculating a first linear parameter matrix according to the covariance map and the pixel variance map;
calculating a second linear parameter matrix according to the second pixel mean value diagram, the first linear parameter matrix and the first pixel mean value diagram;
filtering the first linear parameter matrix and the second linear parameter matrix by using the second BoxFilter algorithm to obtain a first parameter average matrix corresponding to the first linear parameter matrix and a second parameter average matrix corresponding to the second linear parameter matrix;
and obtaining an output image according to the first parameter average matrix, the beauty image and the second parameter average matrix.
Optionally, the computing module 720 may specifically be configured to:
according to the covariance map and the pixel variance map, a first linear parameter matrix is calculated by using a first linear parameter calculation formula, wherein the first linear parameter calculation formula is as follows:
a=covIP./(varI+ε)
where a is the first linear parameter matrix, covIP is the covariance map, varI is the pixel variance map, and ε is a preset matrix;
According to the second pixel mean value diagram, the first linear parameter matrix and the first pixel mean value diagram, a second linear parameter matrix is calculated by using a second linear parameter calculation formula, wherein the second linear parameter calculation formula is as follows:
b=meanP-a.*meanI
where b is the second linear parameter matrix, meanP is the second pixel mean map, and meanI is the first pixel mean map;
obtaining an output image by using an output image calculation formula according to the first parameter average matrix, the beauty image and the second parameter average matrix, wherein the output image calculation formula is as follows:
q=meanA.*I+meanB
where q is the output image, meanA is the first parameter average matrix, I is the face image to be beautified, and meanB is the second parameter average matrix.
Optionally, the acquiring module 710 may be further configured to acquire an image gain of the face image to be beautified;
the BoxFilter parameter in the second BoxFilter algorithm is: set according to the image gain, following a preset BoxFilter parameter setting principle corresponding to the output image and a preset proportional linkage relation between the image gain and the BoxFilter parameter.
Optionally, the acquiring module is further configured to acquire an image gain of the face image to be beautified;
The apparatus may further include:
the setting module, configured to set the sharpening weight according to the image gain, following a preset proportional linkage relation between the image gain and the sharpening weight, to obtain a sharpening formula;
the sharpening module is used for carrying out image sharpening on the beauty image by utilizing the sharpening formula to obtain a sharpened beauty image, wherein the sharpening formula is as follows:
dst(x,y)=(F(x,y)-Gaussian(F(x,y),H)*w)/(1-w)
where dst(x, y) is the pixel value of pixel (x, y) in the sharpened beauty image, F(x, y) is the pixel value of pixel (x, y) in the image to be sharpened, Gaussian(F(x, y), H) is the result of convolving the pixel values of the pixels in the image to be sharpened with the Gaussian convolution kernel H, each parameter in the Gaussian convolution kernel H is replaced by their average value, and w is the sharpening weight.
By applying this embodiment, a face image to be beautified is acquired, texture complexity weights are calculated for the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution map, and, based on the texture complexity weight of each pixel in the weight distribution map, weighted face skin grinding is performed on the face image to be beautified by using a preset face skin-grinding algorithm to obtain a beauty image. Since different areas of the face image to be beautified have different texture complexity, the texture complexity weights assigned to the pixels of different areas in the weight distribution map differ; when face skin grinding is performed with the preset face skin-grinding algorithm, the weighting based on the texture complexity weight of each pixel gives different areas of the face image different skin-grinding strengths, so that part of the detail information can be retained and the face beautifying effect is improved.
An embodiment of the invention also provides an electronic device, as shown in fig. 8, comprising a processor 801 and a computer-readable storage medium 802, wherein,
the computer readable storage medium 802 is used for storing a computer program;
the processor 801 is configured to implement all the steps of the face beautifying method provided by the embodiment of the present invention when executing the program stored on the computer readable storage medium 802.
The computer-readable storage medium 802 and the processor 801 may communicate with each other through a wired or wireless connection, and the electronic device may communicate with other devices through a wired or wireless communication interface. Fig. 8 shows an example of data transmission between the processor 801 and the computer-readable storage medium 802 through a bus, which is not intended to limit the specific connection manner.
The computer-readable storage medium may include RAM (Random Access Memory) or NVM (Non-Volatile Memory), such as at least one magnetic disk memory. Optionally, the computer-readable storage medium may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In this embodiment, by reading and running the computer program stored in the computer-readable storage medium, the processor can realize: acquiring a face image to be beautified, calculating texture complexity weights for the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution map, and, based on the texture complexity weight of each pixel in the weight distribution map, performing weighted face skin grinding on the face image to be beautified by using a preset face skin-grinding algorithm to obtain a beauty image. Since different areas of the face image to be beautified have different texture complexity, the texture complexity weights assigned to the pixels of different areas in the weight distribution map differ; when face skin grinding is performed with the preset face skin-grinding algorithm, the weighting based on the texture complexity weight of each pixel gives different areas of the face image different skin-grinding strengths, so that part of the detail information can be retained and the face beautifying effect is improved.
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, all the steps of the face beautifying method provided by the embodiment of the invention are realized.
In this embodiment, the computer-readable storage medium stores a computer program that, when run, executes the face beautifying method provided by the embodiment of the present invention, and can thus realize: acquiring a face image to be beautified, calculating texture complexity weights for the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution map, and, based on the texture complexity weight of each pixel in the weight distribution map, performing weighted face skin grinding on the face image to be beautified by using a preset face skin-grinding algorithm to obtain a beauty image. Since different areas of the face image to be beautified have different texture complexity, the texture complexity weights assigned to the pixels of different areas in the weight distribution map differ; when face skin grinding is performed with the preset face skin-grinding algorithm, the weighting based on the texture complexity weight of each pixel gives different areas of the face image different skin-grinding strengths, so that part of the detail information can be retained and the face beautifying effect is improved.
For the electronic device and computer-readable storage medium embodiments, the description is relatively simple because the method content involved is substantially similar to the method embodiments described above; for relevant points, reference may be made to the corresponding parts of the description of the method embodiments.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus, electronic device, and computer-readable storage medium embodiments, the description is relatively simple because they are substantially similar to the method embodiments; for relevant points, reference may be made to the corresponding parts of the description of the method embodiments.
The foregoing description is only of preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (12)

1. A face beautifying method, the method comprising:
acquiring a face image to be beautified;
carrying out texture complexity weight calculation on the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution diagram, wherein the weight distribution diagram comprises texture complexity weights of all pixel points of the face image to be beautified;
based on the texture complexity weight of each pixel point in the weight distribution diagram, performing weighted skin-smoothing processing on the face image to be beautified by using a preset skin-smoothing algorithm, to obtain a beautified image;
the step of performing texture complexity weight calculation on the face image to be beautified by using a preset texture complexity analysis algorithm to obtain a weight distribution diagram comprises: calculating, according to the face image to be beautified, a pixel mean value diagram corresponding to the face image to be beautified by using a first box filter algorithm; calculating, according to the face image to be beautified and the pixel mean value diagram, a pixel mean square error diagram corresponding to the face image to be beautified; searching for the maximum value of the pixel mean square error in the area of the pixel mean square error diagram corresponding to the face area of the face image to be beautified; dividing the pixel mean square error of each pixel point in the pixel mean square error diagram by the maximum value of the pixel mean square error, to obtain the texture complexity weight of each pixel point; and generating the weight distribution diagram according to the texture complexity weight of each pixel point;
the calculating, according to the face image to be beautified and the pixel mean value diagram, a pixel mean square error diagram corresponding to the face image to be beautified comprises: performing, by using the first box filter algorithm, mean filtering on the squared difference between the face image to be beautified and the pixel mean value diagram, to obtain the pixel mean square error diagram corresponding to the face image to be beautified.
2. A facial beautification apparatus, the apparatus comprising:
the acquisition module is used for acquiring the face image to be beautified;
the computing module is used for carrying out texture complexity weight computation on the face image to be beautified by utilizing a preset texture complexity analysis algorithm to obtain a weight distribution diagram, wherein the weight distribution diagram comprises texture complexity weights of all pixel points of the face image to be beautified;
the beautification module is configured to perform, based on the texture complexity weight of each pixel point in the weight distribution diagram, weighted skin-smoothing processing on the face image to be beautified by using a preset skin-smoothing algorithm, to obtain a beautified image;
the computing module is specifically configured to: calculate, according to the face image to be beautified, a pixel mean value diagram corresponding to the face image to be beautified by using a first box filter algorithm; calculate, according to the face image to be beautified and the pixel mean value diagram, a pixel mean square error diagram corresponding to the face image to be beautified; search for the maximum value of the pixel mean square error in the area of the pixel mean square error diagram corresponding to the face area of the face image to be beautified; divide the pixel mean square error of each pixel point in the pixel mean square error diagram by the maximum value of the pixel mean square error, to obtain the texture complexity weight of each pixel point; and generate the weight distribution diagram according to the texture complexity weight of each pixel point;
the computing module is specifically configured to: perform, by using the first box filter algorithm and according to the face image to be beautified and the pixel mean value diagram, mean filtering on the squared difference between the face image to be beautified and the pixel mean value diagram, to obtain the pixel mean square error diagram corresponding to the face image to be beautified.
3. The apparatus of claim 2, wherein the apparatus further comprises:
the judging module is used for judging whether the brightness of the face image to be beautified reaches a preset threshold value;
and the updating module is used for adjusting the pixel value of the face image to be beautified by using a preset pixel value correction algorithm if the judging result of the judging module is negative, and updating the face image to be beautified until the brightness of the face image to be beautified reaches the preset threshold value.
4. The apparatus according to claim 3, wherein the preset pixel value correction algorithm comprises: a Gamma correction algorithm;
the acquisition module is also used for acquiring the image gain of the face image to be beautified;
the Gamma variable in the Gamma correction algorithm is set correspondingly according to the image gain and a preset proportional linkage relation between the image gain and the Gamma variable.
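The gain-linked Gamma correction of claims 3 and 4 can be sketched as follows. The linear linkage coefficients and the inverse-gamma brightening are illustrative assumptions of this sketch, not values taken from the patent:

```python
import numpy as np

def gamma_from_gain(gain, base=1.0, k=0.1):
    """Hypothetical proportional linkage: the Gamma variable grows
    linearly with the image gain (coefficients are illustrative)."""
    return base + k * gain

def gamma_correct(image_u8, gain):
    """Lift the brightness of a dark, high-gain image via Gamma correction."""
    gamma = gamma_from_gain(gain)
    norm = image_u8.astype(np.float64) / 255.0
    out = np.power(norm, 1.0 / gamma) * 255.0   # larger gain -> stronger lift
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

With `gain=0` the transform is the identity; as the gain (and hence gamma) rises, mid-tones are lifted toward the preset brightness threshold.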
5. The apparatus according to claim 2, wherein the acquiring module is further configured to acquire an image gain of the face image to be beautified;
the BoxFilter parameters in the first BoxFilter algorithm are: and according to the image gain, setting according to a preset weight distribution diagram corresponding to a box filter parameter setting principle and a preset inverse proportion linkage relation between the image gain and the box filter parameter.
6. The apparatus of claim 2, wherein the preset skin-smoothing algorithm comprises: a bilateral filtering algorithm;
the beautification module is specifically configured to:
calculating the pixel value of each pixel point after filtering by utilizing the bilateral filtering algorithm according to the pixel value of each pixel point in the face image to be beautified;
and performing, based on the texture complexity weight of each pixel point in the weight distribution diagram, weighted calculation on the pixel value of each pixel point in the face image to be beautified and the filtered pixel value of each pixel point through a preset weighting formula, to obtain the beautified image.
7. The apparatus of claim 6, wherein the preset weighting formula is:
MF(x,y)=BF(x,y)*(1-texInf(x,y))+F(x,y)*texInf(x,y)
wherein MF(x, y) is the pixel value of the pixel point (x, y) in the beautified image, BF(x, y) is the filtered pixel value of the pixel point (x, y), texInf(x, y) is the texture complexity weight of the pixel point (x, y) in the weight distribution diagram, and F(x, y) is the pixel value of the pixel point (x, y) in the face image to be beautified.
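The weighting formula of claim 7 is a per-pixel linear blend; a minimal transcription (with toy values, not the patented implementation) shows how the texture weight trades smoothing against detail preservation:

```python
import numpy as np

def blend(original, filtered, tex_weight):
    """MF = BF*(1 - texInf) + F*texInf: a high texture weight keeps the
    original detail, a low weight keeps the bilateral-filtered value."""
    return filtered * (1.0 - tex_weight) + original * tex_weight
```

For example, a flat skin pixel (weight 0) takes the fully smoothed value, while a strongly textured pixel (weight 1) is left untouched.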
8. The apparatus of claim 6, wherein the acquiring module is further configured to acquire an image gain of the face image to be beautified;
the bilateral filtering parameters in the bilateral filtering algorithm are set correspondingly according to the image gain and a preset proportional linkage relation between the image gain and the bilateral filtering parameters.
9. The apparatus of claim 2, wherein the computing module is further configured to:
respectively calculating, by using a second box filter algorithm, a first pixel mean value diagram corresponding to the beautified image, a second pixel mean value diagram corresponding to the face image to be beautified, and a first mean square value diagram corresponding to the beautified image, and calculating a second mean square value diagram according to the dot multiplication result of the beautified image and the face image to be beautified;
calculating a pixel variance diagram corresponding to the beautified image according to the first mean square value diagram and the first pixel mean value diagram;
calculating, according to the second mean square value diagram, the first pixel mean value diagram and the second pixel mean value diagram, a covariance diagram of the beautified image and the face image to be beautified;
calculating a first linear parameter matrix according to the covariance map and the pixel variance map;
calculating a second linear parameter matrix according to the second pixel mean value diagram, the first linear parameter matrix and the first pixel mean value diagram;
filtering the first linear parameter matrix and the second linear parameter matrix by using the second box filter algorithm to obtain a first parameter average matrix corresponding to the first linear parameter matrix and a second parameter average matrix corresponding to the second linear parameter matrix;
and obtaining an output image according to the first parameter average matrix, the beautified image and the second parameter average matrix.
10. The apparatus according to claim 9, wherein the computing module is configured to:
according to the covariance map and the pixel variance map, a first linear parameter matrix is calculated by using a first linear parameter calculation formula, wherein the first linear parameter calculation formula is as follows:
a=covIP/(varI+ε)
wherein a is the first linear parameter matrix, covIP is the covariance map, varI is the pixel variance map, and ε is a preset matrix;
according to the second pixel mean value diagram, the first linear parameter matrix and the first pixel mean value diagram, a second linear parameter matrix is calculated by using a second linear parameter calculation formula, wherein the second linear parameter calculation formula is as follows:
b=meanP-a*meanI
wherein b is the second linear parameter matrix, meanP is the second pixel mean value diagram, and meanI is the first pixel mean value diagram;
obtaining an output image by using an output image calculation formula according to the first parameter average matrix, the beautified image and the second parameter average matrix, wherein the output image calculation formula is:
q=meanA*I+meanB
wherein q is the output image, meanA is the first parameter average matrix, I is the beautified image, and meanB is the second parameter average matrix.
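The formulas of claims 9 and 10 (a = covIP/(varI+ε), b = meanP - a*meanI, q = meanA*I + meanB) are those of a guided filter with the beautified image as the guide. A compact sketch, with `scipy`'s uniform filter standing in for the second box filter and an illustrative radius and ε (scalar here, although the claim allows a preset matrix):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, P, radius=8, eps=1e-3):
    """Guided filtering per the claimed formulas: a = covIP/(varI + eps),
    b = meanP - a*meanI, q = meanA*I + meanB."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size)   # second box filter
    meanI, meanP = box(I), box(P)                  # first/second pixel mean value diagrams
    corrII, corrIP = box(I * I), box(I * P)        # first/second mean square value diagrams
    varI = corrII - meanI * meanI                  # pixel variance diagram
    covIP = corrIP - meanI * meanP                 # covariance diagram
    a = covIP / (varI + eps)                       # first linear parameter matrix
    b = meanP - a * meanI                          # second linear parameter matrix
    return box(a) * I + box(b)                     # q = meanA*I + meanB
```

On a perfectly flat image the variance and covariance vanish, so the output equals the input mean, i.e. the filter is edge-preserving smoothing rather than plain blurring.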
11. The apparatus of claim 9, wherein the acquiring module is further configured to acquire an image gain of the face image to be beautified;
the BoxFilter parameters in the second BoxFilter algorithm are: and according to the image gain, setting according to a preset output image corresponding to a box filter parameter setting principle and a preset proportional linkage relation between the image gain and the box filter parameter.
12. The apparatus according to claim 2, wherein the acquiring module is further configured to acquire an image gain of the face image to be beautified;
the apparatus further comprises:
the setting module is configured to set the sharpening weight correspondingly according to the image gain and a preset proportional linkage relation between the image gain and the sharpening weight, to obtain a sharpening formula;
the sharpening module is configured to perform image sharpening on the beautified image by using the sharpening formula, to obtain a sharpened beautified image, wherein the sharpening formula is:
dst(x,y)=(F(x,y)-Gaussian(F(x,y),H)*w)/(1-w)
wherein dst(x, y) is the pixel value of the pixel point (x, y) in the sharpened beautified image, F(x, y) is the pixel value of the pixel point (x, y) in the image to be sharpened, Gaussian(F(x, y), H) is the result of convolving the pixel value of each pixel point in the image to be sharpened with the Gaussian convolution kernel H, each parameter in the Gaussian convolution kernel H is a mean value, and w is the sharpening weight.
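The sharpening formula of claim 12 is an unsharp-mask variant. The sketch below uses a fixed sharpening weight and a standard `scipy` Gaussian blur for Gaussian(F, H); claim 12 instead links w proportionally to the image gain, and the kernel parameterization is left unspecified in this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, w=0.4, sigma=2.0):
    """dst = (F - Gaussian(F, H)*w) / (1 - w); w in [0, 1) is the
    sharpening weight (fixed here for simplicity)."""
    F = image.astype(np.float64)
    low = gaussian_filter(F, sigma=sigma)   # Gaussian(F(x, y), H)
    return (F - low * w) / (1.0 - w)
```

A uniform image passes through unchanged, since F equals its blur and (F - F*w)/(1 - w) = F; around edges the formula overshoots, which is what produces the sharpened look.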
CN201811066782.3A 2018-09-13 2018-09-13 Face beautifying method and device Active CN110895789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811066782.3A CN110895789B (en) 2018-09-13 2018-09-13 Face beautifying method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811066782.3A CN110895789B (en) 2018-09-13 2018-09-13 Face beautifying method and device

Publications (2)

Publication Number Publication Date
CN110895789A CN110895789A (en) 2020-03-20
CN110895789B true CN110895789B (en) 2023-05-02

Family

ID=69785125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811066782.3A Active CN110895789B (en) 2018-09-13 2018-09-13 Face beautifying method and device

Country Status (1)

Country Link
CN (1) CN110895789B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150393B (en) * 2020-10-12 2024-08-13 深圳数联天下智能科技有限公司 Face image peeling method and device, computer equipment and storage medium
CN114913099B (en) * 2021-12-28 2024-07-16 天翼数字生活科技有限公司 Method and system for processing video file

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833753A (en) * 2010-04-30 2010-09-15 西安电子科技大学 SAR image speckle removal method based on improved Bayesian non-local mean filter
FR2959093A1 (en) * 2010-04-20 2011-10-21 Canon Kk Method for predicting information of complexity texture of current image of image sequences to control allowed flow for hierarchical coding current image, involves obtaining prediction module of information of complexity texture
WO2015095529A1 (en) * 2013-12-19 2015-06-25 Google Inc. Image adjustment using texture mask
CN106204462A (en) * 2015-05-04 2016-12-07 南京邮电大学 Non-local mean denoising method based on image multiple features fusion
CN106339993A (en) * 2016-08-26 2017-01-18 北京金山猎豹科技有限公司 Human face image polishing method and device and terminal device
CN106373095A (en) * 2016-08-29 2017-02-01 广东欧珀移动通信有限公司 Image processing method and terminal
CN106920211A (en) * 2017-03-09 2017-07-04 广州四三九九信息科技有限公司 U.S. face processing method, device and terminal device
CN107169941A (en) * 2017-06-15 2017-09-15 北京大学深圳研究生院 A kind of video denoising method
WO2017185452A1 (en) * 2016-04-27 2017-11-02 宇龙计算机通信科技(深圳)有限公司 Image restoration method and system
CN107730465A (en) * 2017-10-09 2018-02-23 武汉斗鱼网络科技有限公司 Face U.S. face method and device in a kind of image
CN107766831A (en) * 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107798654A (en) * 2017-11-13 2018-03-13 北京小米移动软件有限公司 Image mill skin method and device, storage medium
CN108053377A (en) * 2017-12-11 2018-05-18 北京小米移动软件有限公司 Image processing method and equipment
CN108205804A (en) * 2016-12-16 2018-06-26 阿里巴巴集团控股有限公司 Image processing method, device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100790148B1 * 2006-07-27 2008-01-02 Samsung Electronics Co., Ltd. Real time image complexity measurement method
EP2051524A1 (en) * 2007-10-15 2009-04-22 Panasonic Corporation Image enhancement considering the prediction error
US8351725B2 (en) * 2008-09-23 2013-01-08 Sharp Laboratories Of America, Inc. Image sharpening technique

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2959093A1 (en) * 2010-04-20 2011-10-21 Canon Kk Method for predicting information of complexity texture of current image of image sequences to control allowed flow for hierarchical coding current image, involves obtaining prediction module of information of complexity texture
CN101833753A (en) * 2010-04-30 2010-09-15 西安电子科技大学 SAR image speckle removal method based on improved Bayesian non-local mean filter
WO2015095529A1 (en) * 2013-12-19 2015-06-25 Google Inc. Image adjustment using texture mask
CN106204462A (en) * 2015-05-04 2016-12-07 南京邮电大学 Non-local mean denoising method based on image multiple features fusion
WO2017185452A1 (en) * 2016-04-27 2017-11-02 宇龙计算机通信科技(深圳)有限公司 Image restoration method and system
CN106339993A (en) * 2016-08-26 2017-01-18 北京金山猎豹科技有限公司 Human face image polishing method and device and terminal device
CN106373095A (en) * 2016-08-29 2017-02-01 广东欧珀移动通信有限公司 Image processing method and terminal
CN108205804A (en) * 2016-12-16 2018-06-26 阿里巴巴集团控股有限公司 Image processing method, device and electronic equipment
CN106920211A (en) * 2017-03-09 2017-07-04 广州四三九九信息科技有限公司 U.S. face processing method, device and terminal device
CN107169941A (en) * 2017-06-15 2017-09-15 北京大学深圳研究生院 A kind of video denoising method
CN107730465A (en) * 2017-10-09 2018-02-23 武汉斗鱼网络科技有限公司 Face U.S. face method and device in a kind of image
CN107766831A (en) * 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107798654A (en) * 2017-11-13 2018-03-13 北京小米移动软件有限公司 Image mill skin method and device, storage medium
CN108053377A (en) * 2017-12-11 2018-05-18 北京小米移动软件有限公司 Image processing method and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Yingbin. Face beautification technology based on skin color detection and guided filtering. Journal of Yuncheng University, 2018, (Issue 03), full text. *

Also Published As

Publication number Publication date
CN110895789A (en) 2020-03-20

Similar Documents

Publication Publication Date Title
Bai et al. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion
Deng et al. A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution
Li et al. Structure-revealing low-light image enhancement via robust retinex model
Ma et al. Multi-exposure image fusion by optimizing a structural similarity index
US10410327B2 (en) Shallow depth of field rendering
Yang et al. An adaptive method for image dynamic range adjustment
Maurya et al. Contrast and brightness balance in image enhancement using Cuckoo Search-optimized image fusion
US20150326845A1 (en) Depth value restoration method and system
Lin et al. Low-light enhancement using a plug-and-play Retinex model with shrinkage mapping for illumination estimation
CN112258440B (en) Image processing method, device, electronic equipment and storage medium
CN110675334A (en) Image enhancement method and device
CN111369478B (en) Face image enhancement method and device, computer equipment and storage medium
CN111145086A (en) Image processing method and device and electronic equipment
US20110206293A1 (en) Image processing apparatus, image processing method, and computer readable medium storing program thereof
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
Son et al. Layer-based approach for image pair fusion
Gupta et al. Histogram based image enhancement techniques: a survey
CN115578284A (en) Multi-scene image enhancement method and system
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
CN110895789B (en) Face beautifying method and device
Marukatat Image enhancement using local intensity distribution equalization
CN114862729A (en) Image processing method, image processing device, computer equipment and storage medium
Yuan et al. Adaptive histogram equalization with visual perception consistency
CN115375592A (en) Image processing method and device, computer readable storage medium and electronic device
Pu et al. Fractional-order retinex for adaptive contrast enhancement of under-exposed traffic images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant