
CN108765273B - Virtual cosmetic surgery method and device for photographing faces - Google Patents


Info

Publication number
CN108765273B
CN108765273B (granted publication of application CN201810551058.3A)
Authority
CN
China
Prior art keywords
face
dimensional
dimensional model
user
original
Prior art date
Legal status
Active
Application number
CN201810551058.3A
Other languages
Chinese (zh)
Other versions
CN108765273A (en)
Inventor
黄杰文
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810551058.3A
Publication of CN108765273A
Priority to PCT/CN2019/089348
Application granted
Publication of CN108765273B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application provides a virtual face-lifting method and device for face photographing. The method includes: acquiring the user's current original two-dimensional face image and the depth information corresponding to that image; performing three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information to determine whether the user is registered; if the user is known to be registered, obtaining the three-dimensional face model shaping parameters corresponding to the user and adjusting key points on the original three-dimensional face model according to those parameters to obtain the target three-dimensional face model after virtual face-lifting; and mapping the target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image. Registered users are thus beautified on the basis of a three-dimensional face model, which optimizes the beautification effect and improves both the target user's satisfaction with it and user retention.

Description

Virtual cosmetic surgery method and device for photographing faces

Technical Field

The present application relates to the technical field of portrait processing, and in particular to a virtual cosmetic surgery method and device for photographing a human face.

Background

With the popularization of terminal devices, more and more users are accustomed to taking photos with them, and the camera functions of terminal devices have accordingly diversified; for example, camera applications commonly provide beautification ("beauty") features.

In the related art, beautification is performed on two-dimensional face images; the results are poor, and the processed images lack realism.

Summary of the Invention

The present application aims to solve, at least to some extent, one of the technical problems in the related art.

To this end, an embodiment of the first aspect of the present application provides a virtual face-lifting method for face photographing, including: acquiring the user's current original two-dimensional face image and the depth information corresponding to it; performing three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information to determine whether the user is registered; if the user is known to be registered, obtaining the three-dimensional face model shaping parameters corresponding to the user and adjusting key points on the original three-dimensional face model according to those parameters to obtain the target three-dimensional face model after virtual face-lifting; and mapping that target model onto a two-dimensional plane to obtain the target two-dimensional face image.
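The claimed sequence of steps can be illustrated with a self-contained toy pipeline. Everything below is a stand-in built on invented data so that the flow runs end to end; none of the function bodies reflect the patented algorithms, and all names are hypothetical.

```python
import numpy as np

def reconstruct_3d(depth):
    # Toy "3D reconstruction": pair each pixel's (x, y) with its depth z.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.dstack([xs, ys, depth]).astype(float)

def match_registered_user(registry):
    # Stand-in for querying pre-registered face information.
    return next(iter(registry), None)

def adjust_key_points(model_3d, shaping_params):
    # Shaping parameters here: a per-key-point depth offset, (y, x) -> dz.
    adjusted = model_3d.copy()
    for (y, x), dz in shaping_params.items():
        adjusted[y, x, 2] += dz
    return adjusted

def project_to_2d(model_3d):
    # Trivial orthographic "rendering" back to a 2D image of depths.
    return model_3d[..., 2]

depth = np.array([[10.0, 10.0], [9.0, 10.0]])        # captured depth map
registry = {"user_a": {(1, 0): -0.5}}                # registered user + params
model = reconstruct_3d(depth)
user = match_registered_user(registry)
if user is not None:
    model = adjust_key_points(model, registry[user]) # virtual face-lift in 3D
out = project_to_2d(model)                           # target 2D image
print(out)
```

An unregistered user simply skips the adjustment step and gets the unmodified projection back, mirroring the conditional in the claim.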

To this end, an embodiment of the second aspect of the present application provides a virtual face-lifting device for face photographing, including: an acquisition module, configured to acquire the user's current original two-dimensional face image and the depth information corresponding to it; a reconstruction module, configured to perform three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; a query module, configured to query pre-registered face information to determine whether the user is registered; an adjustment module, configured to obtain, when the user is known to be registered, the three-dimensional face model shaping parameters corresponding to the user and to adjust key points on the original three-dimensional face model according to them, obtaining the target three-dimensional face model after virtual face-lifting; and a mapping module, configured to map the target three-dimensional face model after virtual face-lifting onto a two-dimensional plane to obtain the target two-dimensional face image.

To this end, an embodiment of the third aspect of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the virtual face-lifting method for face photographing of the first-aspect embodiment is implemented.

To this end, an embodiment of the fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the virtual face-lifting method for face photographing of the first-aspect embodiment is implemented.

To this end, an embodiment of the fifth aspect of the present application provides an image processing circuit comprising an image unit, a depth information unit, and a processing unit.

The image unit is configured to output the user's current original two-dimensional face image.

The depth information unit is configured to output the depth information corresponding to the original two-dimensional face image.

The processing unit is electrically connected to both the image unit and the depth information unit, and is configured to: perform three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; query pre-registered face information to determine whether the user is registered; if the user is known to be registered, obtain the three-dimensional face model shaping parameters corresponding to the user and adjust key points on the original three-dimensional face model according to them, obtaining the target three-dimensional face model after virtual face-lifting; and map that model onto a two-dimensional plane to obtain the target two-dimensional face image. The technical solution provided by this application has at least the following beneficial effects:

Registered users are beautified on the basis of a three-dimensional face model, which optimizes the beautification effect and improves both the target user's satisfaction with it and user retention.

Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from it or be learned through practice of the application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of a virtual face-lifting method for face photographing provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of a virtual face-lifting method for face photographing provided by another embodiment of the present application;

FIG. 3 is a schematic structural diagram of a depth image acquisition component provided by an embodiment of the present application;

FIG. 4(a) is a schematic technical flow diagram of a virtual face-lifting method for face photographing provided by an embodiment of the present application;

FIG. 4(b) is a schematic technical flow diagram of a virtual face-lifting method for face photographing provided by another embodiment of the present application;

FIG. 5 is a schematic structural diagram of a virtual face-lifting device for face photographing according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a virtual face-lifting device for face photographing according to another embodiment of the present application;

FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of an image processing circuit in one embodiment; and

FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.

Detailed Description

Embodiments of the present application are described in detail below; examples of the embodiments are illustrated in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, intended to explain the present application, and should not be construed as limiting it.

To address the problem in the prior art that beautification based on two-dimensional face images produces poor results and unrealistic images, embodiments of the present application acquire a two-dimensional face image together with its corresponding depth information, perform three-dimensional reconstruction from the two to obtain a three-dimensional face model, and apply beautification to that model. Compared with two-dimensional beautification, this takes the depth of the face into account and allows different facial regions to be treated differently, improving realism. For example, when smoothing the skin of the nose, the depth information helps clearly separate the nose from the other regions, so the rest of the face is not mistakenly smoothed and blurred.

The virtual face-lifting method and device for face photographing according to the embodiments of the present application are described below with reference to the accompanying drawings.

FIG. 1 is a schematic flowchart of a virtual face-lifting method for face photographing provided by an embodiment of the present application.

The virtual face-lifting method of the embodiments of the present application can be applied to a computer device equipped with an apparatus for acquiring depth information and color (two-dimensional) information, such as a dual-camera system. The computer device may be a mobile phone, tablet computer, personal digital assistant, wearable device, or other hardware device with an operating system and a touch screen and/or display screen.

Step 101: acquire the user's current original two-dimensional face image and the depth information corresponding to it.

It should be noted that, depending on the application scenario, different hardware may be used in the embodiments of the present application to acquire the depth information and the original two-dimensional face image:

As one possible implementation, the original two-dimensional face image is acquired by a visible-light RGB image sensor in the computer device. Specifically, the sensor may include a visible-light camera that captures the visible light reflected by the imaged object to produce the original two-dimensional face image of that object.

As one possible implementation, the depth information is acquired by a structured-light sensor. Specifically, as shown in FIG. 2, obtaining the depth information corresponding to each face image includes the following steps:

Step 201: project structured light onto the current user's face.

Step 202: capture the structured-light image modulated by the current user's face.

Step 203: demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth information corresponding to the face image.

In this example, referring to FIG. 3, when the computer device is a smartphone 1000, the depth image acquisition component 12 includes a structured-light projector 121 and a structured-light camera 122. Step 201 may be implemented by the structured-light projector 121, and steps 202 and 203 by the structured-light camera 122.

That is, the structured-light projector 121 projects structured light onto the current user's face, and the structured-light camera 122 captures the structured-light image modulated by that face and demodulates the phase information corresponding to each pixel of the image to obtain the depth information.

Specifically, after the structured-light projector 121 projects a pattern of structured light onto the current user's face, a structured-light image modulated by the face is formed on its surface. The structured-light camera 122 captures the modulated image and then demodulates it to obtain the depth information. The structured-light pattern may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.

The structured-light camera 122 may further demodulate the phase information corresponding to each pixel in the structured-light image, convert the phase information into depth information, and generate a depth image from it.

Specifically, compared with the unmodulated structured light, the phase of the modulated structured light is changed: the structured light appearing in the captured image is distorted, and this phase change encodes the depth of the object. The structured-light camera 122 therefore first demodulates the phase information corresponding to each pixel in the image and then computes the depth information from it.
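As a deliberately simplified illustration of computing depth from the demodulated phase, the sketch below assumes depth is linearly proportional to the unwrapped phase shift between the modulated and reference fringe patterns; the constant `K` is an invented calibration placeholder, and real structured-light cameras use a calibrated triangulation model instead.

```python
import numpy as np

def phase_to_depth(wrapped_phase, reference_phase, K=5.0):
    # Phase differences are only known modulo 2*pi; np.unwrap removes the
    # resulting jumps before applying the assumed linear depth relation.
    delta = np.unwrap(wrapped_phase - reference_phase)
    return K * delta

# A flat surface shifts every fringe by the same phase offset (0.2 rad here),
# so the recovered "depth" should be constant across the profile.
phase = np.linspace(0.0, 4 * np.pi, 9)
reference = phase % (2 * np.pi)          # pattern seen without the face
measured = (phase + 0.2) % (2 * np.pi)   # modulated (shifted) wrapped phase
depth = phase_to_depth(measured, reference)
```

A curved surface such as a face would shift each pixel's phase by a different amount, producing a per-pixel depth map rather than a constant value.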

Step 102: perform three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain an original three-dimensional face model.

Specifically, three-dimensional reconstruction assigns both depth and two-dimensional information to the relevant points, yielding the original three-dimensional face model. Being a true three-dimensional model, it can fully restore the face and, compared with a two-dimensional face model, additionally captures information such as the three-dimensional angles of the facial features.

Depending on the application scenario, the three-dimensional reconstruction from the depth information and the face image may be performed in ways that include, but are not limited to, the following:

As one possible implementation, key point recognition is performed on each two-dimensional sample face image to obtain positioning key points. For each face image, the relative position of each positioning key point in three-dimensional space is determined from its depth information and its distances on the two-dimensional image (the x-axis and y-axis distances in the two-dimensional plane); adjacent positioning key points are then connected according to these relative positions to generate the original sample three-dimensional face model. The key points are feature points on the face, such as points on the corners of the eyes, the tip of the nose, and the corners of the mouth.
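The step of lifting 2D positioning key points into 3D with their depth values can be sketched as follows; the landmark coordinates and depth map are toy values, and landmark detection itself is out of scope. Adjacent 3D points would then be connected into a mesh.

```python
import numpy as np

def lift_landmarks(landmarks_2d, depth_map):
    # Combine each (x, y) positioning key point with the depth sampled at
    # that pixel to obtain its relative position in 3D space.
    points_3d = []
    for x, y in landmarks_2d:
        z = depth_map[y, x]
        points_3d.append((float(x), float(y), float(z)))
    return np.array(points_3d)

depth_map = np.full((4, 4), 10.0)
depth_map[1, 2] = 7.0                 # e.g. the nose tip sits closer
landmarks = [(2, 1), (0, 0), (3, 3)]  # (x, y) pixel positions of key points
pts = lift_landmarks(landmarks, depth_map)
print(pts)
```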

As another possible implementation, original two-dimensional face images are acquired from multiple angles and the sharper ones are selected as raw data. Feature points are located, and the localization results are used to roughly estimate the face angle. A rough three-dimensional deformable face model is built from the face angle and contour; the facial feature points are brought onto the same scale as that model through translation and scaling operations, and the coordinates of the points corresponding to the facial feature points are extracted to form a sparse three-dimensional deformable face model.

Then, based on the rough face-angle estimate and the sparse deformable model, iterative three-dimensional face reconstruction is performed with a particle swarm optimization algorithm, yielding a three-dimensional geometric face model. Afterwards, the face texture information in the input two-dimensional image is mapped onto that geometric model by texture mapping, producing the complete original three-dimensional face model.

In one embodiment of the present application, to improve the beautification effect, the original three-dimensional face model may also be built from a beautified version of the original two-dimensional face image, so that the constructed model is more attractive and the aesthetic quality of the beautification is assured.

Specifically, the user's attribute features are extracted; these may include gender, age, ethnicity, and skin color, and may be obtained from the personal information the user entered at registration or by analyzing the two-dimensional face image captured at registration. The original two-dimensional face image is then beautified according to these attributes. One way to do this is to pre-establish a correspondence between user attributes and beauty parameters, e.g. acne removal, skin smoothing, and whitening for female users, and acne removal for male users; after the user's attributes are obtained, this correspondence is queried for the matching beauty parameters, which are then applied to the original two-dimensional face image.
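Such an attribute-to-parameter correspondence could be represented as a simple lookup table; the keys and parameter names below are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical mapping from a user attribute to a list of beauty parameters.
BEAUTY_PRESETS = {
    "female": ["acne_removal", "skin_smoothing", "whitening"],
    "male": ["acne_removal"],
}

def beauty_params_for(user_attributes, default=("skin_smoothing",)):
    # Query the pre-established correspondence; fall back to a mild default
    # when no preset matches the user's attributes.
    return BEAUTY_PRESETS.get(user_attributes.get("gender"), list(default))

print(beauty_params_for({"gender": "female", "age": 30}))
```

A production system would key the table on more attributes (age, skin color) and on per-user preferences rather than gender alone.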

Of course, besides the beautification described above, the original two-dimensional face image may also be improved through brightness optimization, sharpening, denoising, occlusion handling, and so on, to ensure that the original three-dimensional face model is accurate.

Step 103: query the pre-registered face information to determine whether the user is registered.

It will be understood that, in this embodiment, the optimized beautification service is provided to registered users. On one hand, registered users obtain the best beautification effect when taking photos, especially group photos, which improves their satisfaction; on the other hand, this helps promote the related product. In practice, to further improve registered users' photographing experience, a distinctive marker may be applied when a registered user is recognized, e.g. highlighting registered users with a face-focusing frame of a different color or a different shape.

In different application scenarios, querying the pre-registered face information to determine whether the user is registered includes, but is not limited to, the following:

As one possible implementation, the facial features of registered users are obtained in advance, e.g. distinctive marks such as birthmarks, and the shapes and positions of facial features such as the nose and eyes. The original two-dimensional face image is analyzed, for instance with image-recognition techniques, to extract the user's facial features, and the pre-registered face database is queried for them: if the features are present, the user is determined to be registered; if not, the user is determined to be unregistered.
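The registration check could be sketched as a nearest-match query against the registered database; real systems compare learned face embeddings, while the plain three-component feature vectors and the distance threshold below are arbitrary illustrations.

```python
import numpy as np

def find_registered_user(feature, database, threshold=0.5):
    # Return the first registered user whose stored feature vector is close
    # enough to the extracted one; None means "not registered".
    for user_id, registered in database.items():
        if np.linalg.norm(feature - registered) < threshold:
            return user_id
    return None

db = {"user_a": np.array([0.1, 0.9, 0.3])}   # pre-registered facial features
print(find_registered_user(np.array([0.12, 0.88, 0.31]), db))
```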

Step 104: if the user is known to be registered, obtain the three-dimensional face model shaping parameters corresponding to the user and adjust key points on the original three-dimensional face model according to them, obtaining the target three-dimensional face model after virtual face-lifting.

The three-dimensional face model shaping parameters include, but are not limited to, the target positions and displacement distances of the key points to be adjusted on the model.

Specifically, if the user is known to be registered, then to provide the registered user with an optimized beautification service, the shaping parameters corresponding to the user are obtained and the key points on the original three-dimensional face model are adjusted according to them, yielding the target model after virtual face-lifting. It will be understood that the original three-dimensional face model is in fact built from key points and the triangular mesh formed by connecting them; adjusting the key points of the regions to be reshaped therefore changes the model accordingly, producing the target face model after virtual face-lifting.

The shaping parameters corresponding to the user may be actively registered by the user, or generated automatically by analyzing the user's original three-dimensional face model.

As one possible implementation, two-dimensional sample face images of the user are acquired from multiple angles, together with the depth information corresponding to each, and three-dimensional reconstruction from these produces an original sample face model. The key points of the regions to be reshaped on that model are adjusted to obtain the target sample model after virtual face-lifting, and the two models are compared to extract the shaping parameters corresponding to the user, e.g. by generating coordinate-difference information from the differing key point coordinates of the same region.
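Deriving shaping parameters as per-key-point coordinate differences between the original and adjusted sample models, then reapplying them to a newly reconstructed model of the same user, can be sketched as follows (toy coordinates throughout):

```python
import numpy as np

def extract_shaping_params(original_pts, target_pts):
    # Per-key-point (dx, dy, dz) between the sample model before and after
    # the user's virtual face-lift adjustments.
    return target_pts - original_pts

def apply_shaping_params(model_pts, params):
    # Move each key point of a freshly reconstructed model by its offset.
    return model_pts + params

sample_original = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 9.0]])
sample_target = np.array([[0.0, 0.0, 10.0], [1.0, 0.2, 8.5]])  # reshaped
params = extract_shaping_params(sample_original, sample_target)
new_capture = np.array([[0.1, 0.0, 10.2], [1.1, 0.0, 9.1]])
result = apply_shaping_params(new_capture, params)
print(result)
```

Storing offsets rather than absolute positions means the saved parameters still apply when a later capture places the face slightly differently.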

在本实施例中,为了更加方便对人脸三维模型的调整,在原始样本人脸三维模型上显示每个整容部位的关键点,比如,以高亮显示的方式显示每个整容部位的关键点,检测用户对待整容部位的关键点进行的移位操作,比如检测用户对选中的关键点的拖动操作等,根据移位操作对关键点进行调整,根据调整后的关键点以及其他相邻关键点的连接,得到虚拟整容后的目标样本人脸三维模型。In this embodiment, in order to make the adjustment of the three-dimensional face model more convenient, the key points of each plastic surgery part are displayed on the original sample face three-dimensional model, for example, the key points of each plastic surgery part are displayed in a highlighted manner , detect the shift operation performed by the user on the key points of the part to be plastic surgery, such as detecting the user's drag operation on the selected key point, etc., adjust the key points according to the shift operation, and adjust the key points according to the adjusted key points and other adjacent key points. Connect the points to obtain the 3D face model of the target sample after virtual plastic surgery.

In actual execution, the adjustment of the key points of the part to be reshaped on the original sample three-dimensional face model may be received through different implementations, illustrated as follows:

First example:

In this example, to facilitate user operation, adjustment controls may be provided so that the user can adjust the three-dimensional face model in real time by operating the controls.

Specifically, in this embodiment, an adjustment control corresponding to the key points of each cosmetic part is generated. A touch operation performed by the user on the adjustment control corresponding to the key points of the part to be reshaped is detected, and the corresponding adjustment parameters are acquired. The key points of the part to be reshaped on the original sample three-dimensional face model are adjusted according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual cosmetic surgery, and the shaping parameters are obtained from the difference between this target sample three-dimensional face model and the original sample three-dimensional face model. The adjustment parameters include the moving direction and moving distance of the key points.
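One way to read "moving direction and moving distance" is as a direction vector plus a scalar distance; the following minimal sketch (the function and parameter names are hypothetical, not from this application) shifts a selected key point accordingly:

```python
import math

def move_keypoint(point, direction, distance):
    """Shift a 3D key point by `distance` along the normalized `direction`."""
    norm = math.sqrt(sum(c * c for c in direction))
    if norm == 0:
        raise ValueError("direction must be non-zero")
    return tuple(p + distance * c / norm for p, c in zip(point, direction))
```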

In this embodiment, cosmetic suggestion information may also be provided to the user, for example suggestions such as "plump the lips" or "fill the apple cheeks". The suggestion information may be in text form, voice form, and so on. If the user confirms the suggestion information, the key points of the part to be reshaped and the adjustment parameters are determined according to it. For example, if the user confirms the above suggestions, the determined parameters may adjust the depth values of the mouth and the cheeks, where the magnitude of the depth-value change may be determined according to the depth values of the corresponding parts on the user's original sample three-dimensional face model. To ensure a natural adjustment effect, the difference between the adjusted depth value and the initial depth value is kept within a certain range. The key points of the part to be reshaped on the original sample three-dimensional face model are then adjusted according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual cosmetic surgery.

To further improve the aesthetics of the cosmetic effect, before the key points of the part to be reshaped on the original three-dimensional face model are adjusted, the skin texture map covering the surface of the original three-dimensional face model may also be beautified to obtain a beautified original three-dimensional face model.

It can be understood that when there is acne in the face image, the color of the corresponding area in the skin texture map may be red; when there are freckles, the color of the corresponding area may be brown or black; and when there is a dark mole, the color of the corresponding area may be black.

Therefore, whether an abnormal region exists can be determined according to the colors of the skin texture map of the original three-dimensional face model. If no abnormal region exists, no processing is needed; if one exists, the region can be beautified with a corresponding beautification strategy, further based on the relative positions in three-dimensional space of the points within the abnormal region and on the color information of the region.

In general, acne protrudes from the skin surface, a dark mole may also protrude, while freckles do not. Therefore, in the embodiments of the present application, the abnormality type of an abnormal region, for example raised or not raised, can be determined according to the height difference between the center point and the edge points of the region. After the abnormality type is determined, the corresponding beautification strategy can be determined according to the abnormality type and the color information, and the abnormal region is then smoothed using the filtering range and filtering strength indicated by the strategy, according to the matching skin color corresponding to the region.
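The raised/not-raised decision and strategy lookup described above could be sketched as follows; the height threshold, color labels, and strength values here are illustrative assumptions, not values disclosed by this application:

```python
def classify_abnormal_region(center_height, edge_heights, color, raise_threshold=0.2):
    """Classify an abnormal region by the height difference between its
    center point and the mean of its edge points, then pick a strategy."""
    mean_edge = sum(edge_heights) / len(edge_heights)
    abn_type = "raised" if center_height - mean_edge > raise_threshold else "flat"
    # Illustrative strategy table keyed by (type, color).
    strategies = {
        ("raised", "red"): {"name": "acne", "smooth_strength": 0.9},
        ("flat", "cyan"): {"name": "tattoo", "smooth_strength": 0.3},
        ("flat", "brown"): {"name": "freckle", "smooth_strength": 0.5},
    }
    return abn_type, strategies.get((abn_type, color),
                                    {"name": "none", "smooth_strength": 0.0})
```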

For example, when the abnormality type is raised and the color information is red, the abnormal region may contain acne, for which a relatively strong degree of smoothing is applied; when the abnormality type is not raised and the color is cyan, the region may contain a tattoo, for which a relatively weak degree of smoothing is applied.

Alternatively, the skin color within the abnormal region may be filled according to the matching skin color corresponding to the region.

For example, when the abnormality type is raised and the color information is red, the abnormal region may contain acne, and the acne-removal beautification strategy may be: smoothing the acne, and filling the skin color within the abnormal region according to the normal skin color near the acne, recorded in the embodiments of the present application as the matching skin color. Alternatively, when the abnormality type is not raised and the color is brown, the abnormal region may contain freckles, and the freckle-removal beautification strategy may be: filling the skin color within the abnormal region according to the normal skin color near the freckles, likewise recorded as the matching skin color.

In the present application, since the depth information within each closed region of the original three-dimensional face model, obtained by taking the key points as vertices, is consistent, each closed region can be beautified separately when the skin texture map covering the model surface is beautified. This increases the reliability of the pixel values in the beautified closed regions and improves the beautification effect.

As another possible implementation of the embodiments of the present application, beautification strategies corresponding to partial faces may be preset, where a partial face may include facial parts such as the nose, lips, eyes, and cheeks. For example, for the nose, the corresponding strategy may be brightening the nose tip and shading the nose wings to enhance the three-dimensional appearance of the nose; for the cheeks, the corresponding strategy may be adding blush and/or smoothing.

Therefore, in the embodiments of the present application, a partial face can be identified from the skin texture map according to the color information and the relative position in the original three-dimensional face model, and then beautified according to the beautification strategy corresponding to that partial face.

Optionally, when the partial face is the eyebrows, the partial face may be smoothed according to the filtering strength indicated by the beautification strategy corresponding to the eyebrows.

When the partial face is a cheek, the partial face may be smoothed according to the filtering strength indicated by the beautification strategy corresponding to the cheek. It should be noted that, to make the result more natural and the beautification effect more prominent, the filtering strength indicated by the strategy corresponding to the cheeks may be greater than that indicated by the strategy corresponding to the eyebrows.

When the partial face belongs to the nose, shadow may be added to the partial face according to the shadow intensity indicated by the beautification strategy corresponding to the nose.

In the present application, beautifying a partial face based on its relative position in the original three-dimensional face model makes the beautified skin texture map more natural and the beautification effect more prominent. Moreover, targeted beautification of partial faces can be achieved, thereby improving the imaging effect and the user's photographing experience.

Of course, in practical applications, if it is learned that the user is not registered, a relatively good beautification service can still be provided for the user.

In this embodiment, if it is learned that the user is not registered, the user's attribute features are extracted. The attribute features may include gender, age, ethnicity, and skin color, determined for example by identifying the user's hairstyle, jewelry, and presence of makeup through image analysis. Preset standard three-dimensional face model shaping parameters corresponding to the attribute features are then acquired, and the key points on the original three-dimensional face model are adjusted according to these standard shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery.

Step 105: map the target three-dimensional face model after virtual cosmetic surgery to a two-dimensional plane to obtain a target two-dimensional face image.

Specifically, after the key points of the part to be reshaped on the original three-dimensional face model are adjusted to obtain the target three-dimensional face model after virtual cosmetic surgery, this model can be mapped to a two-dimensional plane to obtain the target two-dimensional face image, on which beautification processing can then be performed.
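The mapping of model vertices to a two-dimensional plane can be illustrated with a simple pinhole projection; the intrinsic parameters fx, fy, cx, cy below are placeholder values for illustration, not parameters disclosed by this application:

```python
def project_to_plane(vertices, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project 3D vertices (x, y, z) with z > 0 onto image coordinates (u, v)
    using a pinhole camera model: u = fx*x/z + cx, v = fy*y/z + cy."""
    projected = []
    for x, y, z in vertices:
        if z <= 0:
            raise ValueError("vertex behind the camera")
        projected.append((fx * x / z + cx, fy * y / z + cy))
    return projected
```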

In the present application, since the skin texture map is three-dimensional, beautifying it makes the beautified map more natural. The target three-dimensional face model, generated by virtual cosmetic surgery on the beautified three-dimensional face model, is mapped to a two-dimensional plane to obtain a beautified target two-dimensional face image, and beautification processing on this image makes the result more realistic and the effect more prominent. This provides the user with a preview of the post-surgery beautification effect and further improves the user's cosmetic surgery experience.

To make the flow of the virtual cosmetic surgery method for face photographing clearer to those skilled in the art, an example of its application in a specific scene is described below:

In this example, calibration refers to calibrating the camera to determine the key points in three-dimensional space corresponding to the key points in the face image.

In the registration stage, as shown in Fig. 4(a), the face can be scanned in preview through the camera module to acquire two-dimensional sample face images of the user from multiple angles; for example, nearly 20 two-dimensional sample face images and depth maps from different angles are collected for subsequent three-dimensional face reconstruction, with missing angles and scanning progress prompted during scanning. With the depth information corresponding to each two-dimensional sample face image, three-dimensional reconstruction is performed according to the depth information and the two-dimensional sample face images to obtain the original sample three-dimensional face model.

Facial-feature analysis, such as face shape, nose width, nose height, eye size, and lip thickness, is performed on the 3D face model, and cosmetic suggestion information is given. If the user confirms the suggestion information, the key points of the part to be reshaped and the adjustment parameters are determined according to it, and the key points on the original sample three-dimensional face model are adjusted according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual cosmetic surgery.

Further, as shown in Fig. 4(b), in the recognition stage, the user's current original two-dimensional face image and the corresponding depth information are acquired, and three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model. Pre-registered face information is queried to determine whether the user is registered. If it is learned that the user is registered, the three-dimensional face model shaping parameters corresponding to the user are acquired, the key points on the original three-dimensional face model are adjusted according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery, and the target model is mapped to a two-dimensional plane to obtain the target two-dimensional face image.
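The recognition-stage flow above can be summarized as a pipeline sketch in which every step is passed in as a stub; all function and variable names here are hypothetical placeholders for the corresponding steps, not the claimed implementation:

```python
def virtual_facelift_pipeline(image_2d, depth, registry,
                              reconstruct, extract_features, project):
    """High-level sketch of the recognition stage of Fig. 4(b)."""
    model = reconstruct(image_2d, depth)      # original 3D face model
    features = extract_features(image_2d)     # facial features for registry lookup
    params = registry.get(features)           # shaping params if user registered
    if params is None:
        return None                           # unregistered: fall back elsewhere
    target = {k: tuple(c + d for c, d in zip(v, params.get(k, (0, 0, 0))))
              for k, v in model.items()}      # adjust key points by stored deltas
    return project(target)                    # map target model back to 2D
```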

To sum up, the virtual cosmetic surgery method for face photographing of the embodiments of the present application acquires the user's current original two-dimensional face image and the corresponding depth information, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, and queries pre-registered face information to determine whether the user is registered. If the user is registered, the three-dimensional face model shaping parameters corresponding to the user are acquired, the key points on the original three-dimensional face model are adjusted according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery, and the target model is mapped to a two-dimensional plane to obtain the target two-dimensional face image. Thus, registered users are beautified based on the three-dimensional face model, which optimizes the beautification effect and improves the target users' satisfaction with the effect and their stickiness with the product.

To implement the above embodiments, the present application further proposes a virtual cosmetic surgery device for face photographing. Fig. 5 is a schematic structural diagram of the device according to an embodiment of the present application. As shown in Fig. 5, the device includes an acquisition module 10, a reconstruction module 20, a query module 30, an adjustment module 40, and a mapping module 50.

The acquisition module 10 is configured to acquire the user's current original two-dimensional face image and the depth information corresponding to the original two-dimensional face image.

The reconstruction module 20 is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model.

The query module 30 is configured to query pre-registered face information to determine whether the user is registered.

In an embodiment of the present application, as shown in Fig. 6, the query module 30 includes an extraction unit 31 and a determination unit 32.

The extraction unit 31 is configured to analyze the original two-dimensional face image and extract the user's facial features.

The determination unit 32 is configured to query a pre-registered facial database to determine whether the facial features exist; if so, it is determined that the user is registered, and if not, that the user is not registered.

The adjustment module 40 is configured to, when it is learned that the user is registered, acquire the three-dimensional face model shaping parameters corresponding to the user and adjust the key points on the original three-dimensional face model according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery.

The mapping module 50 is configured to map the target three-dimensional face model after virtual cosmetic surgery to a two-dimensional plane to obtain the target two-dimensional face image.

It should be noted that the foregoing explanations of the embodiments of the virtual cosmetic surgery method for face photographing also apply to the virtual cosmetic surgery device of this embodiment, and are not repeated here.

To sum up, the virtual cosmetic surgery device for face photographing of the embodiments of the present application acquires the user's current original two-dimensional face image and the corresponding depth information, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, and queries pre-registered face information to determine whether the user is registered. If the user is registered, the three-dimensional face model shaping parameters corresponding to the user are acquired, the key points on the original three-dimensional face model are adjusted according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery, and the target model is mapped to a two-dimensional plane to obtain the target two-dimensional face image. Thus, registered users are beautified based on the three-dimensional face model, which optimizes the beautification effect and improves the target users' satisfaction with the effect and their stickiness with the product.

To implement the above embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor of a mobile terminal, the virtual cosmetic surgery method for face photographing described in the foregoing embodiments is implemented.

To implement the above embodiments, the present application further proposes an electronic device.

Fig. 7 is a schematic diagram of the internal structure of an electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions, which can be executed by the processor 220 to implement the face beautification method of the embodiments of the present application. The processor 220 provides computing and control capabilities to support the operation of the entire electronic device 200. The display 240 may be a liquid-crystal display or an electronic-ink display, and the input device 250 may be a touch layer covering the display 240, a button, trackball, or touchpad provided on the housing of the electronic device 200, or an external keyboard, touchpad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (for example, a smart bracelet, a smart watch, a smart helmet, or smart glasses).

Those skilled in the art can understand that the structure shown in Fig. 7 is only a schematic diagram of the part of the structure related to the solution of the present application and does not limit the electronic device 200 to which the solution is applied; a specific electronic device 200 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

To implement the above embodiments, the present invention further proposes an image processing circuit, which includes an image unit 310, a depth information unit 320, and a processing unit 330. Specifically:

The image unit 310 is configured to output the user's current original two-dimensional face image.

The depth information unit 320 is configured to output the depth information corresponding to the original two-dimensional face image.

The processing unit 330 is electrically connected to the image unit and the depth information unit respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, query pre-registered face information to determine whether the user is registered, and, if the user is registered, acquire the three-dimensional face model shaping parameters corresponding to the user, adjust the key points on the original three-dimensional face model according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery, and map the target model to a two-dimensional plane to obtain the target two-dimensional face image.

In the embodiments of the present application, the image unit 310 may specifically include an electrically connected image sensor 311 and an image signal processing (ISP) processor 312. Specifically:

The image sensor 311 is configured to output raw image data.

The ISP processor 312 is configured to output the original two-dimensional face image according to the raw image data.

In the embodiments of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 can acquire the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains a face image in YUV or RGB format and sends it to the processing unit 330.

When processing the raw image data, the ISP processor 312 can process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.

As a possible implementation, the depth information unit 320 includes an electrically connected structured light sensor 321 and a depth map generation chip 322. Specifically:

The structured light sensor 321 is configured to generate an infrared speckle pattern.

The depth map generation chip 322 is configured to output the depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.

In the embodiments of the present application, the structured light sensor 321 projects speckle structured light onto the subject, acquires the structured light reflected by the subject, and obtains an infrared speckle pattern by imaging the reflected structured light. The structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the chip determines the morphological change of the structured light from the speckle pattern and accordingly determines the depth of the subject, obtaining a depth map that indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the processing unit 330.

As a possible implementation, the processing unit 330 includes an electrically connected CPU 331 and GPU (Graphics Processing Unit) 332. Specifically:

The CPU 331 is configured to align the face image and the depth map according to the calibration data, and to output a three-dimensional face model according to the aligned face image and depth map.

The GPU 332 is configured to, if it is learned that the user is registered, acquire the three-dimensional face model shaping parameters corresponding to the user, adjust the key points on the original three-dimensional face model according to the shaping parameters to obtain the target three-dimensional face model after virtual cosmetic surgery, and map the target model to a two-dimensional plane to obtain the target two-dimensional face image.

In the embodiments of the present application, the CPU 331 acquires the face image from the ISP processor 312 and the depth map from the depth map generation chip 322. Combined with calibration data obtained in advance, the face image can be aligned with the depth map to determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model.
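The per-pixel alignment of the face image with the depth map via calibration data can be illustrated under a deliberately simplified assumption that the calibration reduces to a scale and an offset between the two sensors; real structured-light calibration is more involved, and the parameter names below are placeholders:

```python
def align_depth_to_image(depth_map, scale=1.0, dx=0, dy=0, default=0.0):
    """Return a lookup depth_at(u, v) that maps a color-image pixel to its
    depth, assuming calibration reduces to: depth pixel =
    (round(u*scale)+dx, round(v*scale)+dy). Out-of-range pixels get `default`."""
    h = len(depth_map)
    w = len(depth_map[0]) if h else 0

    def depth_at(u, v):
        x, y = round(u * scale) + dx, round(v * scale) + dy
        if 0 <= y < h and 0 <= x < w:
            return depth_map[y][x]
        return default

    return depth_at
```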

The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 executes the virtual cosmetic surgery method for face photographing described in the foregoing embodiments according to the three-dimensional face model to obtain the target two-dimensional face image.

Further, the image processing circuit may also include a first display unit 341.

The first display unit 341 is electrically connected to the processing unit 330 and is used to display the adjustment controls corresponding to the key points of the part to be reshaped.

Further, the image processing circuit may also include a second display unit 342.

The second display unit 342 is electrically connected to the processing unit 330 and is used to display the target sample three-dimensional face model after virtual plastic surgery.

Optionally, the image processing circuit may also include an encoder 350 and a memory 360.

In this embodiment of the present application, the beautified face image produced by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.

In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may support DMA (Direct Memory Access). The memory 360 may be configured to implement one or more frame buffers.

The above process will be described in detail below with reference to FIG. 9.

It should be noted that FIG. 9 is a schematic diagram of an image processing circuit as one possible implementation. For ease of description, only the aspects related to the embodiments of the present application are shown.

As shown in FIG. 9, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312. The ISP processor 312 analyzes the raw image data to collect image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it can acquire the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. The ISP processor 312 processes the raw image data to obtain a face image in YUV or RGB format and sends it to the CPU 331.
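The embodiments leave the RGB-to-YUV conversion unspecified. As an illustration, a full-range BT.601 matrix, a common choice in camera pipelines and assumed here rather than taken from the patent, converts an ISP's RGB output to YUV:

```python
import numpy as np

# Full-range BT.601 RGB -> YUV matrix; one common convention for ISP output.
# The patent only says "YUV or RGB format" -- the exact matrix is an assumption.
M = np.array([[ 0.299,    0.587,    0.114  ],
              [-0.14713, -0.28886,  0.436  ],
              [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """rgb: (..., 3) float array in [0, 1]; returns YUV with U and V centered at 0."""
    return rgb @ M.T
```

For a pure white pixel this yields luma Y = 1 and chroma U, V near 0, matching the expectation that achromatic colors carry no chroma.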

When processing the raw image data, the ISP processor 312 may process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit depth precision.
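The per-pixel bit-depth handling is not detailed in the patent. A minimal, illustrative sketch of moving raw samples between bit depths (not the ISP's actual operation):

```python
import numpy as np

def requantize(raw, src_bits, dst_bits):
    """Rescale pixel values from one bit depth to another, e.g. 10-bit -> 8-bit.

    raw : integer array of samples at src_bits precision
    Returns samples rescaled to the dst_bits range.
    """
    src_max = (1 << src_bits) - 1
    dst_max = (1 << dst_bits) - 1
    # Scale so full-scale input maps to full-scale output, rounding to nearest.
    return np.round(raw.astype(np.float64) * dst_max / src_max).astype(np.uint16)
```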

As shown in FIG. 9, the structured light sensor 321 projects speckle structured light onto the subject, captures the structured light reflected by the subject, and images the reflected structured light to obtain an infrared speckle image. The structured light sensor 321 sends the infrared speckle image to the depth map generation chip 322, so that the depth map generation chip 322 can determine how the structured light pattern has deformed from the infrared speckle image and, from this, determine the depth of the subject to obtain a depth map (Depth Map) that indicates the depth of each pixel in the infrared speckle image. The depth map generation chip 322 sends the depth map to the CPU 331.
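The internal algorithm of the depth map generation chip 322 is not disclosed. One common structured-light approach, sketched here under stated assumptions, matches each speckle patch against a reference speckle image captured at a known distance and recovers depth from the horizontal disparity by triangulation; all names and parameters below are hypothetical:

```python
import numpy as np

def depth_from_speckle(ir, ref, f, b, z_ref, patch=7, max_disp=16):
    """Estimate depth by matching speckle patches against a reference image.

    ir, ref : (H, W) infrared speckle images, live frame and reference plane
    f, b    : focal length (pixels) and projector-camera baseline (meters)
    z_ref   : distance of the reference plane (meters)
    Uses the relation 1/z = 1/z_ref - d/(f*b) for horizontal disparity d.
    """
    H, W = ir.shape
    r = patch // 2
    depth = np.zeros((H, W), dtype=np.float32)
    for y in range(r, H - r):
        for x in range(r + max_disp, W - r):
            win = ir[y - r:y + r + 1, x - r:x + r + 1]
            # Search along the epipolar line for the best-matching patch (SAD cost).
            costs = [np.abs(win - ref[y - r:y + r + 1,
                                      x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            d = int(np.argmin(costs))
            depth[y, x] = 1.0 / (1.0 / z_ref - d / (f * b))
    return depth
```

When the live frame equals the reference image, the disparity is zero everywhere and the sketch returns the reference-plane distance, which is the expected degenerate case.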

The CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322, and, combining these with the calibration data obtained in advance, can align the face image with the depth map to determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction based on the depth information and the face image to obtain a three-dimensional face model.

The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 executes the method described in the foregoing embodiments on the basis of the three-dimensional face model, performs virtual plastic surgery on the face, and obtains a face image after virtual plastic surgery. The face image produced by the GPU 332 may be displayed by the display 340 (including the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.

In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may support DMA (Direct Memory Access). The memory 360 may be configured to implement one or more frame buffers.

For example, the following are the steps of implementing the control method using the processor 220 in FIG. 9, or using the image processing circuit (specifically, the CPU 331 and the GPU 332) in FIG. 9:

The CPU 331 acquires a two-dimensional face image and the depth information corresponding to the face image; the CPU 331 performs three-dimensional reconstruction based on the depth information and the face image to obtain a three-dimensional face model; the GPU 332 acquires the three-dimensional face model shaping parameters corresponding to the user and adjusts the key points on the original three-dimensional face model according to those shaping parameters to obtain the target three-dimensional face model after virtual plastic surgery; the GPU 332 maps the target three-dimensional face model after virtual plastic surgery onto a two-dimensional plane to obtain the target two-dimensional face image.
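The shaping-parameter adjustment and the mapping onto a two-dimensional plane can be sketched as follows, assuming the shaping parameters are stored as per-keypoint 3D offsets and the projection uses a pinhole camera model; both assumptions, and all names below, are illustrative rather than disclosed by the patent:

```python
import numpy as np

def apply_shaping(vertices, key_idx, offsets):
    """Move the model's key points by the stored per-user shaping offsets.

    vertices : (N, 3) original face-model vertices
    key_idx  : indices of the key points to adjust
    offsets  : (len(key_idx), 3) displacements extracted at registration time
    """
    out = vertices.copy()
    out[key_idx] += offsets
    return out

def project_to_2d(vertices, K):
    """Pinhole projection of 3D vertices (camera frame, z > 0) with intrinsics K."""
    p = vertices @ K.T            # rows become (x*fx + z*cx, y*fy + z*cy, z)
    return p[:, :2] / p[:, 2:3]   # divide by depth -> (N, 2) image coordinates
```

A vertex on the optical axis projects to the principal point (cx, cy), and untouched vertices pass through the shaping step unchanged, which keeps unedited regions of the face identical in the target image.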

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", or the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance, or as implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless expressly and specifically defined otherwise.

Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a custom logical function or step of the process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

The logic and/or steps represented in the flowcharts, or otherwise described herein, may for example be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium could even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it if necessary, and then stored in a computer memory.

It should be understood that the various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any of the following techniques known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.

Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and that the program can be stored in a computer-readable storage medium; when executed, the program performs one of the steps of the method embodiments, or a combination thereof.

In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (19)

1. A virtual face-lifting method for face photographing is characterized by comprising the following steps:
acquiring original two-dimensional face images of a user at multiple current angles, projecting non-uniform speckle structured light to the face of the user, shooting a structured light image modulated by the face of the user, and demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face image;
performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model;
inquiring face information registered in advance, and judging whether the user is registered;
if it is determined that the user is registered, querying registration information to obtain face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting, wherein before the querying of registration information to obtain the face three-dimensional model shaping parameters corresponding to the user, the method comprises the following steps:
acquiring two-dimensional sample face images of the user at multiple angles and depth information corresponding to each two-dimensional sample face image, performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to acquire an original sample face three-dimensional model, adjusting key points of a part to be trimmed on the original sample face three-dimensional model to obtain a target sample face three-dimensional model after virtual trimming, comparing the original sample face three-dimensional model with the target sample face three-dimensional model, extracting face three-dimensional model trimming parameters corresponding to the user, and storing the face three-dimensional model trimming parameters in the registration information;
if the user is not registered, acquiring attribute features of the user, searching standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjusting key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and mapping the virtual face-lifting target face three-dimensional model to a two-dimensional plane to obtain a target two-dimensional face image.
2. The method of claim 1, wherein the querying pre-registered face information and determining whether the user is registered comprises:
analyzing the original two-dimensional face image and extracting facial features of the user;
inquiring a face database registered in advance, judging whether the face features exist, and if so, determining that the user is registered; and if not, determining that the user is not registered.
3. The method of claim 1, wherein the attribute characteristics of the user comprise:
gender, age, race, and skin color.
4. The method according to claim 1, wherein before the obtaining of an original face three-dimensional model through three-dimensional reconstruction according to the depth information and the original two-dimensional face image, the method further comprises:
extracting attribute features of the user;
and beautifying the original two-dimensional face image according to the attribute characteristics to obtain a beautified original two-dimensional face image.
5. The method of claim 1, wherein after the obtaining of the original face three-dimensional model, the method further comprises:
and beautifying the skin texture map covered on the surface of the original human face three-dimensional model to obtain the beautified original human face three-dimensional model.
6. The method of claim 1, wherein the performing three-dimensional reconstruction based on the depth information and the two-dimensional sample face image to obtain an original sample face three-dimensional model comprises:
performing key point identification on each two-dimensional sample face image to obtain positioning key points;
determining the relative position of the positioning key point in a three-dimensional space according to the depth information of the positioning key point and the distance of the positioning key point on each two-dimensional sample face image;
and connecting adjacent positioning key points according to the relative positions of the positioning key points in the three-dimensional space to generate an original sample human face three-dimensional model.
7. The method according to claim 1, wherein the adjusting key points of the portion to be face-trimmed on the original sample human face three-dimensional model to obtain a virtual face-trimmed target sample human face three-dimensional model comprises:
generating an adjusting control corresponding to the key point of each face-lifting part;
detecting touch operation of an adjusting control corresponding to a key point of a part to be beautified by the user, and acquiring corresponding adjusting parameters;
and adjusting key points of the part to be subjected to face-lifting on the original sample human face three-dimensional model according to the adjustment parameters to obtain a target sample human face three-dimensional model after virtual face-lifting.
8. The method according to claim 1, wherein the adjusting key points of the portion to be face-trimmed on the original sample human face three-dimensional model to obtain a virtual face-trimmed target sample human face three-dimensional model comprises:
displaying key points of each face-lifting part on the original sample human face three-dimensional model;
and detecting the displacement operation of the key points of the part to be subjected to face-lifting by the user, and adjusting the key points according to the displacement operation to obtain the target sample human face three-dimensional model after virtual face-lifting.
9. The method according to claim 1, wherein the adjusting key points of the portion to be face-trimmed on the original sample human face three-dimensional model to obtain a virtual face-trimmed target sample human face three-dimensional model comprises:
providing face-lifting suggestion information to the user;
if the user confirms the face-lifting suggestion information, determining key points and adjustment parameters of the part to be face-lifted according to the face-lifting suggestion information;
and adjusting key points of the part to be subjected to face-lifting on the original sample human face three-dimensional model according to the adjustment parameters to obtain a target sample human face three-dimensional model after virtual face-lifting.
10. A virtual face-lifting device for face photographing, comprising:
an acquisition module, configured to acquire original two-dimensional face images of a user at multiple current angles, project non-uniform speckle structured light onto the face of the current user, capture a structured light image modulated by the face of the current user, and demodulate phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face images;
the reconstruction module is used for carrying out three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model;
the query module is used for querying face information registered in advance and judging whether the user is registered;
the adjusting module is used for inquiring registration information to obtain a face three-dimensional model shaping parameter corresponding to the user when the user is registered, and adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face-lifting;
a registration module, configured to, before the registration information is queried to obtain face three-dimensional model shaping parameters corresponding to the user, obtain two-dimensional sample face images of the user at multiple angles and depth information corresponding to each two-dimensional sample face image, perform three-dimensional reconstruction according to the depth information and the two-dimensional sample face images, obtain an original sample face three-dimensional model, adjust a key point of a portion to be reshaped on the original sample face three-dimensional model, obtain a target sample face three-dimensional model after virtual reshaping, compare the original sample face three-dimensional model with the target sample face three-dimensional model, extract face three-dimensional model shaping parameters corresponding to the user, and store the face three-dimensional model shaping parameters in the registration information;
the adjusting module is further configured to, when the user is not registered, acquire attribute features of the user, search for standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjust key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and the mapping module is used for mapping the virtual face-lifting target face three-dimensional model to a two-dimensional plane to obtain a target two-dimensional face image.
11. The apparatus of claim 10, wherein the query module comprises:
the extracting unit is used for analyzing the original two-dimensional face image and extracting the facial features of the user;
and the determining unit is used for inquiring a face database registered in advance, judging whether the face features exist or not, if so, determining that the user is registered, and if not, determining that the user is not registered.
12. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the virtual face-lift method for photographing a human face as claimed in any one of claims 1 to 9 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a virtual face-lift method for photographing a human face according to any one of claims 1 to 9.
14. An image processing circuit, characterized in that the image processing circuit comprises: an image unit, a depth information unit and a processing unit;
the image unit is used for outputting original two-dimensional face images of a plurality of current angles of a user;
the depth information unit comprises a structured light sensor and a depth map generation chip which are electrically connected, wherein the depth information unit is used for projecting non-uniform speckle structured light to the face of a current user, the structured light sensor is used for generating a structured light image modulated by the face of the current user, and the depth map generation chip is used for demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face image;
the processing unit is electrically connected to the image unit and the depth information unit respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model, query pre-registered face information and determine whether the user is registered, and, if the user is registered, query registration information to obtain face three-dimensional model shaping parameters corresponding to the user and adjust key points on the original face three-dimensional model according to the face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting,
wherein before the registration information is queried to obtain the face three-dimensional model shaping parameters corresponding to the user, the method comprises the following steps:
acquiring two-dimensional sample face images of the user at a plurality of angles and depth information corresponding to each two-dimensional sample face image,
performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face image to obtain an original sample face three-dimensional model,
adjusting key points of a part to be reshaped on the original sample human face three-dimensional model to obtain a target sample human face three-dimensional model after virtual reshaping,
comparing the original sample human face three-dimensional model with the target sample human face three-dimensional model, extracting human face three-dimensional model shaping parameters corresponding to the user, and storing the human face three-dimensional model shaping parameters in the registration information,
if the user is not registered, acquiring attribute features of the user, searching standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjusting key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and mapping the virtual face-lifting target face three-dimensional model to a two-dimensional plane to obtain a target two-dimensional face image.
15. The image processing circuit of claim 14, wherein the image unit comprises an image sensor and an image signal processing ISP processor electrically connected;
the image sensor is used for outputting original image data;
and the image signal processing ISP processor is used for outputting the original two-dimensional face image according to the original image data.
16. The image processing circuit of claim 15, wherein the processing unit comprises a CPU and a GPU electrically connected;
the CPU is used for carrying out three-dimensional reconstruction according to the depth information and the original two-dimensional face image, acquiring an original face three-dimensional model, inquiring face information registered in advance and judging whether the user is registered or not;
and the GPU is used for acquiring a face three-dimensional model shaping parameter corresponding to the user if the user is registered, adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face shaping, and mapping the target face three-dimensional model after virtual face shaping to a two-dimensional plane to obtain a target two-dimensional face image.
17. The image processing circuit of claim 16, wherein the GPU is further configured to:
extracting attribute features of the user;
and beautifying the original two-dimensional face image according to the attribute characteristics to obtain a beautified original two-dimensional face image.
18. The image processing circuit according to any of claims 14-17, further comprising a first display unit;
the first display unit is electrically connected with the processing unit and used for displaying the adjusting control corresponding to the key point of the part to be beautified.
19. The image processing circuit according to any of claims 14-17, further comprising a second display unit;
and the second display unit is electrically connected with the processing unit and is used for displaying the target sample human face three-dimensional model after virtual face-lifting.
CN201810551058.3A 2018-05-31 2018-05-31 Virtual cosmetic surgery method and device for photographing faces Active CN108765273B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810551058.3A CN108765273B (en) 2018-05-31 2018-05-31 Virtual cosmetic surgery method and device for photographing faces
PCT/CN2019/089348 WO2019228473A1 (en) 2018-05-31 2019-05-30 Method and apparatus for beautifying face image


Publications (2)

Publication Number Publication Date
CN108765273A CN108765273A (en) 2018-11-06
CN108765273B true CN108765273B (en) 2021-03-09

Family

ID=64001237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810551058.3A Active CN108765273B (en) 2018-05-31 2018-05-31 Virtual cosmetic surgery method and device for photographing faces

Country Status (2)

Country Link
CN (1) CN108765273B (en)
WO (1) WO2019228473A1 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765273B (en) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual cosmetic surgery method and device for photographing faces
CN111353931B (en) * 2018-12-24 2023-10-03 黄庆武整形医生集团(深圳)有限公司 Shaping simulation method, system, readable storage medium and apparatus
CN110020600B (en) * 2019-03-05 2021-04-16 厦门美图之家科技有限公司 Method for generating a data set for training a face alignment model
CN110189406B (en) * 2019-05-31 2023-11-28 创新先进技术有限公司 Image data labeling method and device
CN110278029B (en) * 2019-06-25 2020-12-22 Oppo广东移动通信有限公司 Data transmission control method and related product
CN110310318B (en) * 2019-07-03 2022-10-04 北京字节跳动网络技术有限公司 Special effect processing method and device, storage medium and terminal
CN110321849B (en) * 2019-07-05 2023-12-22 腾讯科技(深圳)有限公司 Image data processing method, device and computer readable storage medium
CN110473295B (en) * 2019-08-07 2023-04-25 重庆灵翎互娱科技有限公司 Method and equipment for carrying out beautifying treatment based on three-dimensional face model
CN110675489B (en) * 2019-09-25 2024-01-23 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111031305A (en) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
JP2022512262A (en) 2019-11-21 2022-02-03 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Image processing methods and equipment, image processing equipment and storage media
CN112927343B (en) * 2019-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Image generation method and device
CN110807448B (en) * 2020-01-07 2020-04-14 南京甄视智能科技有限公司 Human face key point data enhancement method
GB2591994B (en) * 2020-01-31 2024-05-22 Fuel 3D Tech Limited A method for generating a 3D model
CN111370100A (en) * 2020-03-11 2020-07-03 深圳小佳科技有限公司 Method and system for cosmetic surgery recommendation based on cloud server
CN111539882A (en) * 2020-04-17 2020-08-14 华为技术有限公司 Interactive method for assisting makeup, terminal and computer storage medium
CN113534189B (en) * 2020-04-22 2024-12-13 深圳引望智能技术有限公司 Weight detection method, human body characteristic parameter detection method and device
CN111966852B (en) * 2020-06-28 2024-04-09 北京百度网讯科技有限公司 Face-based virtual face-lifting method and device
CN112150618B (en) * 2020-10-16 2022-11-29 四川大学 Processing method and device for virtual shaping of canthus
CN113177879B (en) * 2021-04-30 2024-12-06 北京百度网讯科技有限公司 Image processing method, device, electronic device and storage medium
CN113724396A (en) * 2021-09-10 2021-11-30 广州帕克西软件开发有限公司 Virtual face-lifting method and device based on face mesh
CN113763285B (en) * 2021-09-27 2024-06-11 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN114581987A (en) * 2021-10-20 2022-06-03 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114120414B (en) * 2021-11-29 2022-11-01 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113902790B (en) * 2021-12-09 2022-03-25 北京的卢深视科技有限公司 Beauty guidance method, device, electronic equipment and computer readable storage medium
CN114299267A (en) * 2021-12-28 2022-04-08 北京快来文化传播集团有限公司 Image editing system and method
CN116778076B (en) * 2022-03-11 2025-01-21 腾讯科技(深圳)有限公司 A method for constructing a face sample and a related device
CN114998554B (en) * 2022-05-05 2024-08-20 清华大学 Three-dimensional cartoon face modeling method and device
CN115239888B (en) * 2022-08-31 2023-09-12 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for reconstructing three-dimensional face image
CN117831187B (en) * 2024-01-08 2024-08-23 浙江德方智能科技有限公司 Park sidewalk management method and system based on face recognition authorization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6283858B1 (en) * 1997-02-25 2001-09-04 Bgk International Incorporated Method for manipulating images
CN105938627A (en) * 2016-04-12 2016-09-14 湖南拓视觉信息技术有限公司 Processing method and system for virtual plastic processing on face
CN107705356A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777195B (en) * 2010-01-29 2012-04-25 浙江大学 Three-dimensional face model adjusting method
CN106940880A (en) * 2016-01-04 2017-07-11 中兴通讯股份有限公司 A kind of U.S. face processing method, device and terminal device
CN107993209B (en) * 2017-11-30 2020-06-12 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108040208A (en) * 2017-12-18 2018-05-15 信利光电股份有限公司 A kind of depth U.S. face method, apparatus, equipment and computer-readable recording medium
CN108765273B (en) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual cosmetic surgery method and device for photographing faces


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Computer-Aided Three-Dimensional Plastic Surgery; Tian Wei; China Master's Theses Full-text Database, Information Science and Technology; 2007-10-15; main text pp. 9-52 *

Also Published As

Publication number Publication date
CN108765273A (en) 2018-11-06
WO2019228473A1 (en) 2019-12-05

Similar Documents

Publication Publication Date Title
CN108765273B (en) Virtual cosmetic surgery method and device for photographing faces
CN108447017B (en) Face virtual face-lifting method and device
CN109118569B (en) Rendering method and device based on three-dimensional model
CN107852533B (en) Three-dimensional content generation device and three-dimensional content generation method thereof
CN107479801B (en) Terminal display method and device based on user expression and terminal
CN108764180A (en) Face recognition method and device, electronic equipment and readable storage medium
CN108682050B (en) Three-dimensional model-based beautifying method and device
CN108876709A (en) Face beautification method, apparatus, electronic device and readable storage medium
WO2020034698A1 (en) Three-dimensional model-based special effect processing method and device, and electronic apparatus
CN109191584B (en) Three-dimensional model processing method and device, electronic equipment and readable storage medium
CN107948517B (en) Preview image blur processing method, apparatus and device
WO2021036314A1 (en) Facial image processing method and apparatus, image device, and storage medium
CN107818305A (en) Image processing method, device, electronic device, and computer-readable storage medium
WO2020034785A1 (en) Method and device for processing three-dimensional model
CN108550185A (en) Face beautification processing method and device
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN109191393B (en) Three-dimensional model-based beauty method
CN110675487A (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN109272579B (en) Three-dimensional model-based makeup method and device, electronic equipment and storage medium
CN109242760B (en) Face image processing method and device and electronic equipment
CN103679767A (en) Image generation apparatus and image generation method
KR20170092533A (en) A face pose rectification method and apparatus
JP2004030007A (en) Makeup simulation apparatus, makeup simulation method, makeup simulation program and recording medium with program recorded thereon
JP4348028B2 (en) Image processing method, image processing apparatus, imaging apparatus, and computer program
CN107705356A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant