
CN112308957B - An automatic generation method of optimal fat and thin face portrait images based on deep learning - Google Patents


Info

Publication number
CN112308957B
CN112308957B
Authority
CN
China
Prior art keywords
face
thin
fat
optimal
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010983159.5A
Other languages
Chinese (zh)
Other versions
CN112308957A (en)
Inventor
肖钦杰
唐祥峻
吴优
金乐阳
金小刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Publication of CN112308957A
Application granted
Publication of CN112308957B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a deep-learning-based method for automatically generating an optimal fat and thin portrait image, comprising the following steps: generating a three-dimensional face model, face-related parameters, and a texture map for the two-dimensional face in a face portrait image, the face-related parameters including face pose parameters; inputting the face portrait image into a trained deep-learning-based optimal face fat and thin estimation model, which outputs the optimal fat and thin scale of the face; taking the output optimal fat and thin scale as input, adjusting the three-dimensional face model according to a three-dimensional face fat and thin adjustment algorithm to generate the optimal fat and thin three-dimensional face model; projecting the textured optimal fat and thin three-dimensional model onto the two-dimensional plane according to the face pose parameters to obtain the optimal fat and thin face; and seamlessly embedding the optimal fat and thin face into the face portrait image through a foreground-background fusion algorithm to obtain the optimal fat and thin face portrait image. The invention can be used in social media, animation, games, and other fields.

Description

An automatic generation method of optimal fat and thin face portrait images based on deep learning

Technical Field

The invention relates to the technical field of portrait editing, and in particular to a deep-learning-based method for automatically generating optimal fat and thin face portrait images.

Background Art

The human face plays a vital role in expressing personal characteristics and creating first impressions. Most people want all of their photos on Facebook or Instagram to look good. Portrait editing therefore plays a vital role in many applications, such as social media, advertising, visual effects, and fitness incentives. To this end, a series of face portrait editing methods have been proposed to meet the needs of various aspects of production and life.

The facial attractiveness of a portrait image is affected by two main factors: facial texture and facial shape.

Facial texture is determined solely by the colors on the face and can be improved with direct color adjustments such as brightness enhancement. For example, Shih et al. transfer the style of one portrait to another. Works such as face makeup transfer, face lighting transfer, and face relighting generate convincing results on texture, but the lack of shape editing limits their possible applications.

Therefore, many works related to face shape editing have been proposed. Shih et al. solved the problem of face deformation under wide-angle lenses by correcting camera distortion. Similarly, methods such as personalized exaggerated face generation and personalized face analysis and extraction have also been proposed to solve problems related to face shape editing.

To improve the attractiveness of human faces, Leyvand et al. (Data-Driven Enhancement of Facial Attractiveness. ACM Transactions on Graphics 27, 3, Article 38 (Aug. 2008), 9 pages.) proposed an aesthetics-inspired face shape editing method that, according to public aesthetic standards, adjusts the nose, mouth, eyes, eyebrows, face contour, etc. of a frontal face in 2D, relying on high-precision facial feature points. However, their method cannot beautify profile faces and is extremely sensitive to feature-point accuracy.

Liao et al. (Enhancing the Symmetry and Proportion of 3D Face Geometry. IEEE Transactions on Visualization and Computer Graphics 18, 10 (2012), 1704-1716.) extended the beautification standards to three-dimensional faces and added profile-face beautification adjustments, but their method focuses on adjusting salient structures (facial features and contour proportions) rather than considering the underlying physiological structure of the face, so it easily produces unreasonable results.

Zhao et al. (Parametric Reshaping of Portrait Images for Weight-Change. IEEE Computer Graphics and Applications 38, 1 (2018), 77-90.) proposed a single-parameter fat and thin adjustment method based on a soft-tissue-thickness regression model over sparse feature points in the facial region. This method adjusts the face based on real face soft-tissue depth data; because it relies only on sparse face feature points (rather than full-face information) to reconstruct and deform the 3D face, it can produce unreasonable faces (especially when handling faces with exaggerated poses and expressions). Furthermore, their method is not automatic, requiring the user to manually adjust parameters to obtain satisfactory results.

A morphable model for the synthesis of 3D faces (Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH 1999, 187-194) discloses a parametric representation based on the three-dimensional morphable model (3DMM) that can be used to characterize a 3D face, then automatically estimate the face's fat and thin scale, and finally adjust the face reasonably to obtain the optimal fat and thin result.

However, due to the representation limitations of 3DMM, maintaining face identity while adjusting fatness remains a challenging problem. Moreover, automatic face fat and thin estimation requires labeled data across different groups of people (gender, age, etc.); to ensure the accuracy and consistency of annotations, it is best to have labeled data of the same person at different fat and thin scales. However, since an individual's facial fatness usually remains constant over long periods, it is difficult to obtain data of the same person at different fat and thin levels.

SUMMARY OF THE INVENTION

To solve the problem of quickly and automatically generating an optimal fat and thin face, and in particular the problem of the rationality of the fat and thin adjustment, the present invention provides a deep-learning-based method for automatically generating an optimal fat and thin portrait image. Given a face portrait image with arbitrary pose and expression, the method can automatically generate a shapely, attractive, and reasonable optimal fat and thin face portrait.

In order to solve the above technical problems, the present invention provides the following technical solutions:

A deep-learning-based method for automatically generating an optimal fat and thin face portrait image, comprising the following steps:

S1. Generate a three-dimensional face model, face-related parameters, and a texture map for the two-dimensional face in the face portrait image, where the face-related parameters include face pose parameters;

S2. Input the face portrait image into a trained deep-learning-based optimal face fat and thin estimation model and output the optimal fat and thin scale of the face; the optimal face fat and thin estimation model is trained on a face database containing public face aesthetic fat and thin annotations;

S3. Taking the output optimal fat and thin scale as input, adjust the three-dimensional face model according to a three-dimensional face fat and thin adjustment algorithm to generate the optimal fat and thin three-dimensional face model;

S4. According to the face pose parameters, project the textured optimal fat and thin three-dimensional model onto the two-dimensional plane to obtain the optimal fat and thin face; the textured optimal fat and thin three-dimensional face model is obtained from the texture map;

S5. Seamlessly embed the optimal fat and thin face into the face portrait image through a foreground-background fusion algorithm to obtain the optimal fat and thin face portrait image.
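Assembled end to end, steps S1 to S5 form a linear pipeline. The sketch below names one hypothetical helper per stage; none of these function names come from the patent, they only make the data flow explicit:

```python
# Hypothetical pipeline sketch of steps S1-S5; every helper name is an
# assumption for illustration, not part of the patented method's API.

def generate_optimal_portrait(image,
                              reconstruct_3d,   # S1: 3D model, params, texture map
                              estimate_scale,   # S2: trained fat/thin estimator
                              adjust_fatness,   # S3: 3D fat/thin adjustment
                              project_to_2d,    # S4: pose-driven projection
                              fuse_background): # S5: foreground-background fusion
    mesh, params, texture = reconstruct_3d(image)       # S1
    scale = estimate_scale(image)                       # S2
    mesh_opt = adjust_fatness(mesh, params, scale)      # S3
    face_2d = project_to_2d(mesh_opt, texture, params)  # S4
    return fuse_background(face_2d, image)              # S5
```

Passing stub callables through the pipeline confirms that each stage's output feeds the next in the order the claims describe.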

Compared with the prior art, the present invention has the following beneficial effects:

(1) The present invention provides a simple, effective, and stable method for automatically generating the optimal fatness of a portrait image. The invention can process portrait images regardless of expression and pose, and automatically generates reasonable, high-fidelity, shapely, identity-preserving face portraits. It has high application value in real social media and Internet settings.

(2) The three-dimensional face fat and thin adjustment algorithm provided by the present invention can balance dense information constraints and rationality constraints concerning the 3DMM, face identity, and face fatness, generating a convincing, well-proportioned face. It is suitable for applications that require free control over face fatness.

(3) The face fat and thin database provided by the present invention contains public aesthetic fatness annotations and can be used to train a deep-learning-based fat and thin estimation model, suitable for applications such as automatic portrait editing.

Brief Description of the Drawings

FIG. 1 is a flowchart of the deep-learning-based method for automatically generating an optimal fat and thin face portrait image provided by the present invention.

Detailed Description of the Embodiments

As shown in FIG. 1, the deep-learning-based method for automatically generating an optimal fat and thin face portrait image provided by the present invention comprises the following steps:

S1. Generate a three-dimensional face model, face-related parameters, and a texture map for the two-dimensional face in the face portrait image, where the face-related parameters include face pose parameters.

S11. Input the two-dimensional face detected in the face portrait image into a face feature point detection algorithm and locate the face key points of the two-dimensional face;

S12. Reconstruct a three-dimensional face model according to a monocular-vision three-dimensional face reconstruction algorithm and calculate the face-related parameters, which include face identity parameters, facial expression parameters, and face pose parameters;

S13. Obtain the texture map from the face portrait image and the three-dimensional face model.

Texture mapping is a method for defining high-frequency detail, surface texture, or color information on computer-generated graphics or 3D models.

The texture map maps pixels from the face portrait image onto the three-dimensional face model, "wrapping" the face portrait image around the three-dimensional face model.

The reconstruction of the three-dimensional face model according to the monocular-vision three-dimensional face reconstruction algorithm and the calculation of the face-related parameters proceed as follows:

S121. As shown in formula (1), the three-dimensional face model is characterized as a 3DMM-based parametric expression. The three-dimensional face model is represented as a fixed-topology mesh X ∈ ℝ^(3n) composed of n vertices, expressed as a linear combination of the average face X̄ and a set of face shape offsets:

X(α, β) = X̄ + P_id·α + P_exp·β,  formula (1)

where P_id is the face identity shape offset basis and P_exp is the facial expression shape offset basis; the algorithm contains N_id shape parameters and N_exp expression parameters; α denotes the face identity parameters, β denotes the facial expression parameters, and ℝ^(3n) denotes 3n-dimensional space;
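As a minimal numerical illustration of formula (1), the mesh is a linear function of the identity and expression coefficients. The dimensions below are toy values chosen for the sketch; real 3DMMs use tens of thousands of vertices and dozens of basis vectors:

```python
import numpy as np

n, N_id, N_exp = 4, 3, 2                  # toy sizes, far smaller than a real 3DMM
rng = np.random.default_rng(0)

X_mean = rng.normal(size=3 * n)           # average face, stacked (x, y, z) per vertex
P_id = rng.normal(size=(3 * n, N_id))     # identity shape-offset basis
P_exp = rng.normal(size=(3 * n, N_exp))   # expression shape-offset basis

def face_3dmm(alpha, beta):
    """Formula (1): X(alpha, beta) = X_mean + P_id @ alpha + P_exp @ beta."""
    return X_mean + P_id @ alpha + P_exp @ beta

# Zero coefficients reproduce the average face exactly.
assert np.allclose(face_3dmm(np.zeros(N_id), np.zeros(N_exp)), X_mean)
```

Fixing β and varying α changes identity-related shape only, which is the property the fat and thin adjustment in S3 relies on.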

S122. Obtain the face-related parameters from the optimal solution of the energy equation of formula (2);

E(P) = w_f·E_f(P) + w_r·E_r(P),  formula (2)

where P denotes the face-related parameters, P = {R, t, α, β}; R and t denote the rotation and translation of the face pose, respectively; E_f(P) denotes the energy term fitting the three-dimensional face to the two-dimensional face through the face key points; E_r(P) is a regularization term; and w_f, w_r are the coefficients of the two energy terms, respectively.

The optimization of this energy equation is a convex problem, which can be transformed into a least-squares problem to obtain the optimal solution.
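Because the energy is quadratic in the unknowns once the pose is fixed, one illustrative reduction (a hypothetical linearization, not the patent's exact solver) stacks the data term and the regularizer into a single least-squares solve:

```python
import numpy as np

# Toy version of formula (2) with pose (R, t) held fixed, so only the
# identity coefficients alpha remain and E(P) is linear least squares:
#   minimize  w_f * ||A @ alpha - d||^2 + w_r * ||alpha||^2
# A and d are assumed, stand-in quantities (landmark Jacobian and residuals).
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
d = rng.normal(size=10)
w_f, w_r = 1.0, 0.1

# Stack the weighted data term and regularizer into one lstsq problem.
A_aug = np.vstack([np.sqrt(w_f) * A, np.sqrt(w_r) * np.eye(3)])
d_aug = np.concatenate([np.sqrt(w_f) * d, np.zeros(3)])
alpha, *_ = np.linalg.lstsq(A_aug, d_aug, rcond=None)

# Optimality check: the normal equations of the combined energy hold.
grad = w_f * A.T @ (A @ alpha - d) + w_r * alpha
assert np.allclose(grad, 0.0, atol=1e-8)
```

The same stacking trick extends to any sum of quadratic energy terms, which is why convexity makes the full parameter fit tractable.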

For the monocular-vision three-dimensional face reconstruction algorithm, see Patrik Huber, Guosheng Hu, Rafael Tena, Pouria Mortazavian, P. Koppen, William J. Christmas, Matthias Ratsch, and Josef Kittler. 2016. A Multiresolution 3D Morphable Face Model and Fitting Framework. INSTICC, 79-86.

For the energy equation, see Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Niessner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. 2018. Deep Video Portraits. ACM TOG 37, 4, Article 163 (2018), 14 pages.

S2. Input the face portrait image into the trained deep-learning-based optimal face fat and thin estimation model and output the optimal fat and thin scale of the face; the optimal face fat and thin estimation model is trained on a face database containing public face aesthetic fat and thin annotations.

S21. Construct a face database containing public face aesthetic fat and thin annotations;

S211. Use the method of S1 to generate the three-dimensional face model, face-related parameters, and texture map for the two-dimensional face in each original face portrait photo of the face database;

S212. Use the method of S3, setting several different fat and thin scales in place of the optimal fat and thin scale as input, to obtain the correspondingly adjusted three-dimensional face models; the fat and thin scale takes values in {-2.0, -1.6, -1.2, -0.8, -0.4, 0.0, 0.4, 0.8};

S213. Use the methods of S4~S5 to project each adjusted three-dimensional face model onto the two-dimensional plane in turn and embed it into the corresponding original face portrait photo, obtaining a sequence of face portrait photos of different fatness;

S214. Ask multiple raters to select the face portrait photo with the best fatness from the photo sequence; the corresponding fat and thin scale is taken as the fat and thin baseline u of that face, and the public face aesthetic fat and thin scale of each of the other face portrait photos is set to s* = v - u, where v is the fat and thin scale corresponding to that photo;
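The S214 labeling rule can be sketched directly; the rater choice below is illustrative, not data from the patent:

```python
# S214: every rendered photo's scale v is re-expressed relative to the
# rater-chosen baseline u as s* = v - u.
scales = [-2.0, -1.6, -1.2, -0.8, -0.4, 0.0, 0.4, 0.8]   # v values from S212
best_index = 3            # illustrative: raters picked the photo at v = -0.8
u = scales[best_index]    # fat and thin baseline of this face

labels = [v - u for v in scales]   # public aesthetic scale s* per photo

# The chosen photo gets label 0 by construction; the photo rendered at
# v = 0.0 (the unmodified face) gets label -u.
assert labels[best_index] == 0.0
assert labels[scales.index(0.0)] == 0.8
```

Expressing every label relative to the rater-chosen optimum is what lets a single regression target encode "how far this face is from its best fatness".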

S215. Perform steps S211~S214 on every original face portrait photo in the face database, thereby obtaining a face database containing public face aesthetic fat and thin annotations.

S22. Train the optimal face fat and thin estimation model using the face database containing public face aesthetic fat and thin annotations;

The optimal face fat and thin estimation model is based on the ResNet network structure and comprises a feature extraction layer and a classifier, where the classifier has a single output. During training, the objective is to minimize ∑||s - s*||², where s is the fat and thin scale estimated by the model and s* denotes the public face aesthetic fat and thin scale.

For the ResNet network structure, see He K., Zhang X., Ren S., et al. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778. The feature extraction layers are fixed, and the classifier is modified to a single output.
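The objective ∑||s - s*||² over a frozen feature extractor can be illustrated end to end with stand-ins: random features in place of the frozen ResNet feature extraction layer, and the single-output head fit in closed form by least squares (a simplification of the gradient-based training the patent implies; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.normal(size=(100, 16))   # stand-in for frozen ResNet features (100 photos)
w_true = rng.normal(size=16)
s_star = F @ w_true              # public aesthetic fat and thin labels s*

# Single-output head minimizing sum ||s - s*||^2 over the training set.
w, *_ = np.linalg.lstsq(F, s_star, rcond=None)
s = F @ w                        # model estimates

# Noiseless toy data with more samples than features: the fit is exact.
assert np.allclose(s, s_star, atol=1e-8)
```

With frozen features, only the head's weights are trained, which is exactly the "fix the feature extraction layer, modify the classifier to one output" recipe above.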

S23. Input the face portrait image into the optimal face fat and thin estimation model to obtain the optimal fat and thin scale.

The construction method of the face database containing public face aesthetic fat and thin annotations according to the present invention describes face fatness with a single parameter (the public face aesthetic fat and thin scale), avoiding a large variety of face parameters (such as separate adjustments for the forehead, cheeks, chin, etc.).

S3. Taking the output optimal fat and thin scale as input, adjust the three-dimensional face model according to the three-dimensional face fat and thin adjustment algorithm to generate the optimal fat and thin three-dimensional face model.

S31. First, fix the expression parameters in the three-dimensional face model, simplifying the model to X(α). Any point x_i in X(α) is expressed as:

x_i(α) = x̄_i + P_id^(i)·α,  formula (3)

where P_id^(i) is the submatrix of P_id corresponding to x_i;

S32. Compute the optimal solution of formula (5) to obtain the new positions Y of the vertices of the three-dimensional face model;

E(Y, α) = E_shape(Y) + E_3DMM(Y, α) + E_ID(Y),  formula (5)

where Y = {y_1, y_2, ..., y_n}, y_i ∈ ℝ³, are the new positions of the n vertices of the three-dimensional face model; E_shape(Y) denotes the sparse fat and thin model constraint, E_3DMM(Y, α) denotes the face rationality constraint, and E_ID(Y) denotes the face identity constraint; ℝ³ denotes three-dimensional space;

E_shape(Y) is computed as:

E_shape(Y) = ∑_j (y_I(j) - x′_I(j))²,  formula (6)

where I(j) is the index of the j-th face feature point, y_I(j) denotes the new position of the j-th face feature point, and x′_I(j) denotes the position of the j-th face feature point adjusted according to the optimal fat and thin scale, computed as:

x′I(j)=xI(j)0)+δBMI·bj·nI(j) 式(4)x′ I(j) = x I(j)0 )+δBMI·b j ·n I(j) Formula (4)

其中,xI(j)0)代表第j个人脸特征点的原始位置,BMI表示身高体重比差,δBMI代表最佳胖瘦尺度,

Figure BDA0002687969610000064
是第j个人脸特征点的法向,b表示所有特征点的回归系数,b={bj},j=1,2,...,52;Among them, x I(j)0 ) represents the original position of the jth face feature point, BMI represents the height-to-weight ratio difference, and δBMI represents the optimal fat and thin scale,
Figure BDA0002687969610000064
is the normal direction of the jth face feature point, b represents the regression coefficient of all feature points, b={b j }, j=1, 2,..., 52;
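Formula (4) displaces each of the 52 feature points along its normal by an amount proportional to the optimal scale δBMI and its regression coefficient b_j; a toy sketch with random stand-in geometry:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 52                                # number of sparse face feature points
x = rng.normal(size=(m, 3))           # original positions x_I(j)(alpha_0)
normals = rng.normal(size=(m, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit normals n_I(j)
b = rng.uniform(0.5, 1.5, size=m)     # soft-tissue regression coefficients b_j
delta_bmi = -0.8                      # optimal fat and thin scale from the estimator

# Formula (4): x' = x + delta_bmi * b_j * n_I(j), applied per feature point.
x_new = x + delta_bmi * b[:, None] * normals

# Each point moves along its own normal by exactly |delta_bmi| * b_j.
dist = np.linalg.norm(x_new - x, axis=1)
assert np.allclose(dist, abs(delta_bmi) * b)
```

A negative δBMI pushes the soft tissue inward (a thinner face), a positive one outward, with per-point magnitudes set by the regression model.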

The sparse fat and thin model constraint E_shape(Y) uses a sparse face fat and thin adjustment model based on forensic research (see Sven De Greef, Peter Claes, Dirk Vandermeulen, Wouter Mollemans, Paul Suetens, and Guy Willems. 2006. Large-Scale In-Vivo Caucasian Facial Soft Tissue Thickness Database for Craniofacial Reconstruction. Forensic Science International 159 (2006), S126-S146.) for the face fat and thin constraint. The sparse face fat and thin model uses a linear regression model to represent the mapping between the soft tissue depth at 52 three-dimensional face feature point positions and age and body mass index (BMI).

E_3DMM(Y, α) constrains the face within a reasonable face space. E_3DMM(Y, α) is computed as:

E_3DMM(Y, α) = ||Y - X(α)||² + ||∈⁻¹α||²

where ∈ is the diagonal matrix formed by the eigenvalues of P_id, and ||∈⁻¹α||² is the regularization term for the face identity parameter α;

E_ID(Y) constrains the face before and after the fat and thin adjustment to be the same face. E_ID(Y) is computed as:

E_ID(Y) = ||Δ(Y) - Δ(X(α₀))||²

where Δ(·) is the Laplacian operator; since E_shape modifies the facial features only slightly, this operator is sufficient to preserve the facial characteristics.
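One common concrete choice for the Laplacian Δ(·) in the identity constraint is the umbrella (graph) Laplacian, which encodes each vertex relative to the mean of its mesh neighbors; the patent does not specify which discrete Laplacian it uses, so this is an assumption:

```python
import numpy as np

def umbrella_laplacian(V, neighbors):
    """Graph Laplacian: delta_i = v_i - mean of v_i's neighbors."""
    return np.array([V[i] - V[list(nbrs)].mean(axis=0)
                     for i, nbrs in enumerate(neighbors)])

# A 4-vertex toy mesh with tetrahedron-like adjacency.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
neighbors = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}]
delta = umbrella_laplacian(V, neighbors)

# Translating the whole mesh leaves the Laplacian unchanged, which is why
# matching Laplacians preserves local facial detail while still allowing
# the global shape change that the fatness adjustment introduces.
delta_shifted = umbrella_laplacian(V + np.array([5., -2., 1.]), neighbors)
assert np.allclose(delta, delta_shifted)
```

Penalizing the difference of Laplacians, rather than of raw positions, is what keeps the person recognizable after reshaping.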

E(Y, α) constitutes a convex optimization problem; the new positions Y of all vertices after the fat and thin adjustment can be obtained by solving a least-squares problem.

S33. Add the expression term P_exp·β₀ to the optimal solution Y to obtain the optimal fat and thin three-dimensional face model.

The three-dimensional face adjustment algorithm of the present invention refers to a three-dimensional face fat and thin adjustment algorithm that balances the face identity constraint, the density and rationality constraints of the 3DMM, and the fat and thin constraint of the sparse face adjustment model to generate a convincing, well-proportioned face.

S4. According to the face pose parameters, project the textured optimal fat and thin three-dimensional model onto the two-dimensional plane to obtain the optimal fat and thin face; the textured optimal fat and thin three-dimensional face model is obtained from the texture map.

The projection is an orthographic projection.
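A minimal sketch of an orthographic projection under an assumed pose convention (rotation R, translation t, then dropping the depth coordinate; real pipelines also apply a scale and an image-coordinate mapping):

```python
import numpy as np

def orthographic_project(V, R, t):
    """Apply the pose, then drop z: each vertex v maps to the (x, y) of R @ v + t."""
    posed = V @ R.T + t   # rotate and translate every vertex
    return posed[:, :2]   # orthographic: keep x and y only

# With an identity pose, vertices project to their own x, y coordinates.
V = np.array([[0.5, -0.25, 2.0], [1.0, 1.0, -3.0]])
uv = orthographic_project(V, np.eye(3), np.zeros(3))
assert np.allclose(uv, [[0.5, -0.25], [1.0, 1.0]])
```

Unlike a perspective projection, depth never divides the coordinates, so the face's apparent size does not change with distance to the camera.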

S5. Seamlessly embed the optimal fat and thin face into the face portrait image through the foreground-background fusion algorithm to obtain the optimal fat and thin face portrait image.

S51. Use the texture map to establish the correspondence between the optimal fat and thin face and the face portrait image;

S52. Use this correspondence to guide the deformation of the background mesh of the face portrait image, so that the optimal fat and thin face is seamlessly embedded into the deformed face portrait image, obtaining the optimal fat and thin face portrait image.

For the foreground-background fusion algorithm, see YiChang Shih, Wei-Sheng Lai, and Chia-Kai Liang. 2019. Distortion-Free Wide-Angle Portraits on Camera Phones. ACM TOG 38, 4, Article 61, 12 pages.

Claims (4)

1. An optimal fat and thin face portrait image automatic generation method based on deep learning is characterized by comprising the following steps:
s1, generating a three-dimensional face model, face related parameters and texture mapping of a two-dimensional face in the face portrait image, which are as follows:
s11, inputting the two-dimensional face detected in the face portrait image into a face feature point detection algorithm, and positioning face key points of the two-dimensional face;
s12, reconstructing a three-dimensional face model according to a monocular vision three-dimensional face reconstruction algorithm and calculating face related parameters, wherein the face related parameters comprise face identity characteristic parameters, face expression parameters and face posture parameters, and the method specifically comprises the following steps:
s121, as shown in formula (1), representing the three-dimensional face model as a 3DMM-based parametric expression, wherein the model is a fixed-topology mesh of n vertices, X ∈ R^{3n}, obtained from the mean face X̄ ∈ R^{3n} in linear combination with a set of face shape offsets:
X(α, β) = X̄ + P_id·α + P_exp·β    formula (1)
wherein P_id ∈ R^{3n×N_id} is the face identity feature shape offset, P_exp ∈ R^{3n×N_exp} is the facial expression feature shape offset, the model comprising N_id shape parameters and N_exp expression parameters; α represents the face identity feature parameters, β represents the face expression parameters, and R^{3n} represents a 3n-dimensional space;
s122, obtaining the face-related parameters from the optimal solution of the energy equation of formula (2):
E(P) = w_f·E_f(P) + w_r·E_r(P)    formula (2)
wherein P represents the face-related parameters, P = {R, t, α, β}; R and t represent the rotation and translation of the face pose, respectively; E_f(P) represents the energy term fitting the three-dimensional face to the two-dimensional face through the face key points; E_r(P) is a regularization term; and w_f, w_r are the coefficients of the two energy terms, respectively;
s13, obtaining texture mapping according to the face portrait image and the three-dimensional face model;
s2, inputting the face portrait image into a trained optimal face fat-thin estimation model based on deep learning, and outputting an optimal face fat-thin scale, wherein the optimal face fat-thin estimation model is obtained by training a face database containing public face aesthetic fat-thin labels, and the method specifically comprises the following steps:
s21, constructing a face database containing the aesthetic fat and thin annotation of the public face, which comprises the following specific steps:
s211, generating a three-dimensional face model, face related parameters and texture mapping of a two-dimensional face in the original face portrait photo of the face database by adopting the method in S1;
s212, setting a plurality of different fat-thin scales to replace the optimal fat-thin scale as input by adopting the method in S3, and respectively obtaining the correspondingly adjusted three-dimensional face model, wherein the value range of the fat-thin scales is { -2.0, -1.6, -1.2, -0.8, -0.4, 0.0, 0.4, 0.8 };
s213, projecting the adjusted three-dimensional face model onto a two-dimensional plane in sequence by adopting the method in S4-S5, and then embedding the three-dimensional face model into corresponding original face portrait photos to obtain face portrait photo sequences with different fat and thin sizes;
s214, asking a number of raters to select the face portrait photo with the best fat-thin appearance from the face portrait photo sequence, taking the corresponding fat-thin scale as the face's fat-thin baseline u, and setting the public face aesthetic fat-thin scale of each other photo to s* = v − u, wherein v is the fat-thin scale corresponding to that photo;
s215, processing S211-S214 is carried out on each original face portrait photo in the face database, and therefore the face database containing aesthetic fat and thin labels of public faces is obtained;
s22, training an optimal face fat-thin estimation model by using a face database containing public face aesthetic fat-thin labels;
the optimal face fat-thin estimation model is based on the ResNet network structure and comprises a feature extraction layer and a classifier with a single output; during training, ||s − s*||_2 is minimized, wherein s is the fat-thin scale estimated by the optimal face fat-thin estimation model and s* represents the public face aesthetic fat-thin scale;
s23, inputting the face portrait image into the face optimal fat-thin estimation model to obtain an optimal fat-thin scale;
s3, adjusting the three-dimensional face model by taking the output optimal fat-thin scale as input according to a three-dimensional face fat-thin adjustment algorithm, and generating the optimal fat-thin three-dimensional face model as follows:
s31, first fixing the expression parameters in the three-dimensional face model, simplifying it to X(α); any vertex x_i in X(α) is expressed as:
x_i(α) = x̄_i + P_id^(i)·α    formula (3)
wherein α represents the face identity feature parameters, and P_id^(i) is the sub-matrix of P_id corresponding to x_i;
s32, calculating the optimal solution of formula (5) to obtain the new position Y of each vertex of the three-dimensional face model:
E(Y, α) = E_shape(Y) + E_3DMM(Y, α) + E_ID(Y)    formula (5)
wherein Y = {y_1, …, y_n}, y_i ∈ R^3, are the new positions of the n vertices of the three-dimensional face model; E_shape(Y) represents the sparse fat-thin model constraint, E_3DMM(Y, α) represents the face plausibility constraint, E_ID(Y) represents the face identity feature constraint, and R^3 represents three-dimensional space;
the calculation formula of E_shape(Y) is:
E_shape(Y) = Σ_j (y_I(j) − x′_I(j))²    formula (6)
wherein I(j) is the index of the j-th face feature point, and y_I(j) represents the new position of the j-th face feature point; x′_I(j) represents the new position of the j-th face feature point adjusted according to the optimal fat-thin scale, computed as:
x′_I(j) = x_I(j)(α_0) + δBMI·b_j·n_I(j)    formula (4)
wherein x_I(j)(α_0) represents the original position of the j-th face feature point, BMI represents the body mass index (height-weight ratio), δBMI represents the optimal fat-thin scale, n_I(j) is the normal at the j-th face feature point, and b = {b_j} represents the regression coefficients of all feature points, j being an integer from 1 to 52;
the calculation formula of E_3DMM(Y, α) is:
E_3DMM(Y, α) = ‖Y − X(α)‖² + ‖e⁻¹·α‖²    formula (7)
wherein e is a diagonal matrix formed from the eigenvalues of P_id, and ‖e⁻¹·α‖² is the regularization term of the face identity feature parameter α;
the calculation formula of E_ID(Y) is:
E_ID(Y) = ‖Δ(Y) − Δ(X(α_0))‖²    formula (8)
wherein Δ(·) is the Laplacian operator and X(α_0) is the original three-dimensional face model;
s33, adding the expression P_exp·β back to the optimal solution Y to obtain the optimal fat-thin three-dimensional face model;
s4, projecting the optimal fat-thin three-dimensional model with textures onto a two-dimensional plane according to the human face posture parameters to obtain an optimal fat-thin human face; the optimal fat and thin three-dimensional face model with the texture is obtained according to texture mapping;
and S5, seamlessly embedding the optimal fat and thin face into the face portrait image through a front and back background fusion algorithm to obtain the optimal fat and thin face portrait image.
2. The method for automatically generating an optimal fat-thin human face portrait image based on deep learning of claim 1, wherein the projection of the step S4 is an orthogonal projection.
3. The method for automatically generating an optimal fat-thin face portrait image based on deep learning of claim 1, wherein the optimal fat-thin face is seamlessly embedded into the face portrait image through a front-back background fusion algorithm to obtain the optimal fat-thin face portrait image, which is specifically as follows:
s51, establishing the correspondence from the face with the best fat and thin to the face portrait image by utilizing texture mapping;
and S52, using the correspondence to guide the deformation of the background mesh of the face portrait image, so that the optimal fat-thin face is seamlessly embedded into the deformed face portrait image, obtaining the optimal fat-thin face portrait image.
4. An apparatus for automatically generating an optimal fat-thin face portrait image based on deep learning, comprising a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor executes the method for automatically generating an optimal fat-thin face portrait image based on deep learning according to any one of claims 1 to 3.
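For orientation only, the labeling rule of step S214 (s* = v − u) and the training objective of step S22 (minimize ‖s − s*‖₂) reduce to a few lines; the ResNet feature extractor and the face database are omitted here, with plain arrays standing in:

```python
import numpy as np

def aesthetic_labels(scales, best_index):
    """Step S214: the public aesthetic fat-thin label of each photo is
    s* = v - u, where u is the scale of the photo the raters judged best."""
    scales = np.asarray(scales, dtype=float)
    return scales - scales[best_index]

def training_loss(s_pred, s_star):
    """Step S22 objective: || s - s* ||_2."""
    return float(np.linalg.norm(np.asarray(s_pred, dtype=float)
                                - np.asarray(s_star, dtype=float)))

seq = [-2.0, -1.6, -1.2, -0.8, -0.4, 0.0, 0.4, 0.8]   # scales from step S212
labels = aesthetic_labels(seq, best_index=5)           # raters picked scale 0.0
print(labels[0], training_loss(labels, labels))        # -2.0 0.0
```

A perfect estimator drives the loss to zero; in the patent the predictions s come from the ResNet-based regression head rather than the labels themselves.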
CN202010983159.5A 2020-08-14 2020-09-17 An automatic generation method of optimal fat and thin face portrait images based on deep learning Active CN112308957B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010822680 2020-08-14
CN2020108226800 2020-08-14

Publications (2)

Publication Number Publication Date
CN112308957A CN112308957A (en) 2021-02-02
CN112308957B true CN112308957B (en) 2022-04-26

Family

ID=74484008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010983159.5A Active CN112308957B (en) 2020-08-14 2020-09-17 An automatic generation method of optimal fat and thin face portrait images based on deep learning

Country Status (1)

Country Link
CN (1) CN112308957B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117151992A (en) * 2023-07-07 2023-12-01 浙江大学 Face beautifying method, equipment and storage medium based on perspective distortion correction

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034355B (en) * 2021-04-20 2022-06-21 浙江大学 A deep learning-based method for double chin removal in portrait images
CN116563451B (en) * 2022-01-29 2024-11-01 腾讯科技(深圳)有限公司 Face three-dimensional result generation method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013189101A1 (en) * 2012-06-20 2013-12-27 浙江大学 Hair modeling and portrait editing method based on single image
CN111144285A (en) * 2019-12-25 2020-05-12 中国平安人寿保险股份有限公司 Fat and thin degree identification method, device, equipment and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201023092A (en) * 2008-12-02 2010-06-16 Nat Univ Tsing Hua 3D face model construction method
CN103208133B (en) * 2013-04-02 2015-08-19 浙江大学 The method of adjustment that in a kind of image, face is fat or thin
US10095917B2 (en) * 2013-11-04 2018-10-09 Facebook, Inc. Systems and methods for facial representation
CN108960020A (en) * 2017-05-27 2018-12-07 富士通株式会社 Information processing method and information processing equipment
US10565758B2 (en) * 2017-06-14 2020-02-18 Adobe Inc. Neural face editing with intrinsic image disentangling
CN110276795B (en) * 2019-06-24 2022-11-18 大连理工大学 A Light Field Depth Estimation Method Based on Split Iterative Algorithm
CN111524226B (en) * 2020-04-21 2023-04-18 中国科学技术大学 Method for detecting key point and three-dimensional reconstruction of ironic portrait painting
CN111508069B (en) * 2020-05-22 2023-03-21 南京大学 Three-dimensional face reconstruction method based on single hand-drawn sketch


Also Published As

Publication number Publication date
CN112308957A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN109584353B (en) Method for reconstructing three-dimensional facial expression model based on monocular video
CN109377557B (en) Real-time three-dimensional face reconstruction method based on single-frame face image
Zhou et al. Parametric reshaping of human bodies in images
Hwang et al. Reconstruction of partially damaged face images based on a morphable face model
Ma et al. Facial performance synthesis using deformation-driven polynomial displacement maps
Chai et al. High-quality hair modeling from a single portrait photo.
WO2022001236A1 (en) Three-dimensional model generation method and apparatus, and computer device and storage medium
CN101751689B (en) A 3D Face Reconstruction Method
Bickel et al. Multi-scale capture of facial geometry and motion
Wang et al. High resolution acquisition, learning and transfer of dynamic 3‐D facial expressions
CN112308957B (en) An automatic generation method of optimal fat and thin face portrait images based on deep learning
WO2022143645A1 (en) Three-dimensional face reconstruction method and apparatus, device, and storage medium
Fyffe et al. Multi‐view stereo on consistent face topology
WO2013189101A1 (en) Hair modeling and portrait editing method based on single image
CN106780713A (en) A kind of three-dimensional face modeling method and system based on single width photo
WO2022143354A1 (en) Face generation method and apparatus for virtual object, and device and readable storage medium
CN110660076A (en) Face exchange method
CN103348386A (en) Computer-implemented method and apparatus for tracking and re-shaping human shaped figure in digital video
CN111028354A (en) Image sequence-based model deformation human face three-dimensional reconstruction scheme
CN102103756A (en) Comic exaggeration method, device and system for human face digital image supporting position deflection
CN108564619B (en) Realistic three-dimensional face reconstruction method based on two photos
CN111640172A (en) Attitude migration method based on generation of countermeasure network
CN115861525A (en) Multi-view Face Reconstruction Method Based on Parametric Model
Li et al. Lightweight wrinkle synthesis for 3d facial modeling and animation
Song et al. A generic framework for efficient 2-D and 3-D facial expression analogy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant