
CN109147037B - Special effect processing method, device and electronic device based on 3D model - Google Patents


Info

Publication number
CN109147037B
CN109147037B (application CN201810934012.XA)
Authority
CN
China
Prior art keywords
special effect
model
dimensional model
dimensional
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810934012.XA
Other languages
Chinese (zh)
Other versions
CN109147037A (en)
Inventor
阎法典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810934012.XA priority Critical patent/CN109147037B/en
Publication of CN109147037A publication Critical patent/CN109147037A/en
Priority to PCT/CN2019/088118 priority patent/WO2020034698A1/en
Application granted granted Critical
Publication of CN109147037B publication Critical patent/CN109147037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a special effect processing method and device based on a three-dimensional model, and an electronic device. The method includes: acquiring a captured two-dimensional face image and depth information corresponding to the face image; performing three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; identifying the expression category corresponding to the two-dimensional face image; and fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. The method removes the need for the user to manually switch between special effect models, raises the degree of automation of special effect addition, makes the process more enjoyable and playable for the user, and improves the realism of the added effects so that the processed result looks more natural.

Description

Special Effect Processing Method and Device Based on a Three-Dimensional Model, and Electronic Device

Technical Field

The present application relates to the technical field of electronic devices, and in particular to a special effect processing method and device based on a three-dimensional model, and an electronic device.

Background

With the popularization of electronic devices, more and more users like to use the photographing function of their devices to take pictures or record daily life. To make captured images more interesting, various applications for beautifying images or adding special effects have been developed. Users can choose their favorite effects from those built into an application to process an image according to their own needs, making the image vivid and interesting.

Because the addition of facial special effects such as tears depends on the user's active selection, the degree of automation of special effect addition is low. In addition, special effects are added on the two-dimensional image, so the effect cannot fit or match the image perfectly, resulting in poor image processing quality and weak realism of the added effect.

Summary of the Invention

The present application proposes a special effect processing method and device based on a three-dimensional model, and an electronic device, so that the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, the process becomes more enjoyable and playable for the user, and the realism of the added effects is improved so that the processed result looks more natural. It addresses the technical problems in the prior art of weak realism and a low degree of automation of special effect addition.

An embodiment of one aspect of the present application proposes a special effect processing method based on a three-dimensional model, including:

acquiring a captured two-dimensional face image and depth information corresponding to the face image;

performing three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face;

identifying the expression category corresponding to the two-dimensional face image; and

fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.

According to the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional face image and the depth information corresponding to the face image are acquired; three-dimensional reconstruction of the face is then performed according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the expression category corresponding to the two-dimensional face image is identified; and finally the three-dimensional model is fused with the special effect model corresponding to the expression category. As a result, the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, and the process becomes more enjoyable and playable for the user. In addition, the special effect model is determined according to the expression the user makes and is then fused with the three-dimensional model, which improves the realism of the added effect and makes the processed result more natural.

An embodiment of another aspect of the present application proposes a special effect processing device based on a three-dimensional model, including:

an acquisition module, configured to acquire a captured two-dimensional face image and depth information corresponding to the face image;

a reconstruction module, configured to perform three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face;

an identification module, configured to identify the expression category corresponding to the two-dimensional face image; and

a fusion module, configured to fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.

According to the special effect processing device based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional face image and the depth information corresponding to the face image are acquired; three-dimensional reconstruction of the face is then performed according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the expression category corresponding to the two-dimensional face image is identified; and finally the three-dimensional model is fused with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, and the process becomes more enjoyable and playable for the user. In addition, the special effect model is determined according to the expression the user makes and is then fused with the three-dimensional model, which improves the realism of the added effect and makes the processed result more natural.

An embodiment of another aspect of the present application proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the special effect processing method based on a three-dimensional model proposed in the foregoing embodiments of the present application is implemented.

An embodiment of another aspect of the present application proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the special effect processing method based on a three-dimensional model proposed in the foregoing embodiments of the present application is implemented.

Additional aspects and advantages of the present application will be set forth in part in the following description; in part they will become apparent from the description, or may be learned through practice of the present application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided in Embodiment 1 of the present application;

FIG. 2 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided in Embodiment 2 of the present application;

FIG. 3 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided in Embodiment 3 of the present application;

FIG. 4 is a schematic structural diagram of the special effect processing device based on a three-dimensional model provided in Embodiment 4 of the present application;

FIG. 5 is a schematic structural diagram of the special effect processing device based on a three-dimensional model provided in Embodiment 5 of the present application;

FIG. 6 is a schematic diagram of the internal structure of an electronic device in one embodiment;

FIG. 7 is a schematic diagram of an image processing circuit as one possible implementation;

FIG. 8 is a schematic diagram of an image processing circuit as another possible implementation.

Detailed Description of the Embodiments

The embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the present application; they should not be construed as limiting it.

The present application mainly addresses the technical problems in the prior art of weak realism and a low degree of automation of special effect addition, and proposes a special effect processing method based on a three-dimensional model.

According to the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional face image and the depth information corresponding to the face image are acquired; three-dimensional reconstruction of the face is then performed according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the expression category corresponding to the two-dimensional face image is identified; and finally the three-dimensional model is fused with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, and the process becomes more enjoyable and playable for the user. In addition, the special effect model is determined according to the expression the user makes and is then fused with the three-dimensional model, which improves the realism of the added effect and makes the processed result more natural.

The special effect processing method and device based on a three-dimensional model and the electronic device according to the embodiments of the present application are described below with reference to the accompanying drawings.

FIG. 1 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided in Embodiment 1 of the present application.

As shown in FIG. 1, the special effect processing method based on a three-dimensional model includes the following steps.

Step 101: acquire a captured two-dimensional face image and depth information corresponding to the face image.

In the embodiments of the present application, the electronic device may include a visible light image sensor, with which the two-dimensional face image may be acquired. Specifically, the visible light image sensor may include a visible light camera, which captures the visible light reflected by the face to form the two-dimensional face image.

In the embodiments of the present application, the electronic device may further include a structured light image sensor, with which the depth information corresponding to the face image may be acquired. Optionally, the structured light image sensor may include a laser projector and a laser camera. Pulse width modulation (PWM) may drive the laser projector to emit structured light; the structured light illuminates the face, and the laser camera captures the structured light reflected by the face to obtain a structured light image of the face. A depth engine may then calculate the depth information of the face, i.e., the depth information corresponding to the two-dimensional face image, from that structured light image.
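The patent does not give the depth engine's formula; as an illustrative sketch, a common structured-light triangulation model recovers depth from the pixel shift of the projected pattern as z = f * b / d, where f is the focal length in pixels, b the projector-camera baseline, and d the observed disparity. All parameter values below are hypothetical:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth (in mm) from the pixel shift of the projected
    structured-light pattern: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# A larger pattern shift means the surface is closer to the camera.
near = depth_from_disparity(disparity_px=20.0, focal_px=500.0, baseline_mm=40.0)
far = depth_from_disparity(disparity_px=5.0, focal_px=500.0, baseline_mm=40.0)
```

Running the per-pixel version of this over the whole structured light image yields the dense depth map that accompanies the two-dimensional face image.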

Step 102: perform three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face.

In the embodiments of the present application, after the depth information and the face image are acquired, three-dimensional reconstruction of the face may be performed according to them to obtain a three-dimensional model corresponding to the face. In the present application, the three-dimensional model of the face is constructed by three-dimensional reconstruction from the depth information and the face image, rather than by simply acquiring RGB data and depth data.

As a possible implementation, the depth information may be fused with the color information corresponding to the two-dimensional face image to obtain the three-dimensional model of the face. Specifically, based on face keypoint detection, keypoints of the face may be extracted from the depth information and from the color information; the two sets of keypoints are then registered and fused, and the three-dimensional model of the face is finally generated from the fused keypoints. Here a keypoint is a salient point on the face, or a point at a key position, such as an eye corner, the nose tip, or a mouth corner.
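One way the registration of depth and color keypoints can be pictured is by back-projecting each detected 2D keypoint, together with its registered depth value, through a pinhole camera model. The intrinsics (fx, fy, cx, cy) and keypoint coordinates below are illustrative assumptions, not values from the patent:

```python
def back_project(keypoints_2d, depths, fx, fy, cx, cy):
    """Lift detected 2D face keypoints (u, v) to 3D points using their
    registered depth values and a pinhole camera model."""
    points_3d = []
    for (u, v), z in zip(keypoints_2d, depths):
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return points_3d

# Two hypothetical keypoints, e.g. the nose tip and an eye corner.
pts = back_project([(320, 240), (350, 230)], [500.0, 520.0],
                   fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

The fused 3D keypoints produced this way are the vertices from which the face model is generated.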

As another possible implementation, keypoint recognition may be performed on the face image based on face keypoint detection to obtain second keypoints of the face image. Then, according to the depth information of each second keypoint and its position in the face image, the relative position in the three-dimensional face model of the first keypoint corresponding to that second keypoint is determined, so that adjacent first keypoints can be connected according to their relative positions in three-dimensional space to generate local three-dimensional face frameworks. A local face region may include the nose, lips, eyes, cheeks, and other facial parts.

After the local three-dimensional face frameworks are generated, the different frameworks may be stitched together according to the first keypoints they share, yielding the three-dimensional model corresponding to the face.
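A minimal sketch of the stitching step, assuming each local framework is stored as a keypoint-id-to-position map plus an edge set, so that frameworks sharing a first keypoint merge at that shared id (the frameworks and ids below are hypothetical):

```python
def stitch_frames(frame_a, frame_b):
    """Merge two local face frameworks; shared keypoint ids act as
    the seam between them."""
    points = dict(frame_a["points"])
    points.update(frame_b["points"])   # shared ids collapse to one vertex
    edges = frame_a["edges"] | frame_b["edges"]
    return {"points": points, "edges": edges}

# Hypothetical nose and cheek frameworks that share keypoint "n2".
nose = {"points": {"n1": (0, 0, 10), "n2": (1, 0, 10)}, "edges": {("n1", "n2")}}
cheek = {"points": {"n2": (1, 0, 10), "c1": (3, 0, 9)}, "edges": {("n2", "c1")}}
face = stitch_frames(nose, cheek)
```

Repeating the merge over all local frameworks produces one connected framework covering the whole face.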

Step 103: identify the expression category corresponding to the two-dimensional face image.

As a possible implementation, the user may pre-record reference expressions for different expression categories, for example for sadness, happiness, frustration, anger, and thinking. After the two-dimensional face image is obtained, it can be matched against the reference expressions, and the expression category of the matched target reference expression is taken as the expression category of the face image.

As another possible implementation, at least one frame of face image captured before the current frame may be acquired, and the expression category may be determined from the current frame together with those earlier frames. For example, by comparing them it can be determined whether the eyebrows are raised or lowered, whether the eyes become larger or smaller, and whether the mouth corners move up or down, from which the expression category follows. For instance, when the comparison shows that the eyes narrow and both the eye corners and the mouth corners rise, the expression category can be determined to be happy.
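The patent does not specify thresholds or landmark names; the following heuristic sketch only illustrates the idea of inferring the category from how mouth corners and eye openings move between frames (image y grows downward, so a smaller mouth-corner y means raised corners):

```python
def classify_expression(prev, curr):
    """Infer an expression category from landmark motion between two
    frames; thresholds and landmark names are illustrative only."""
    # Decreasing mouth-corner y means the corners are being raised.
    mouth_lift = prev["mouth_corner_y"] - curr["mouth_corner_y"]
    eye_change = curr["eye_opening"] - prev["eye_opening"]
    if mouth_lift > 2 and eye_change < 0:
        return "happy"   # corners raised, eyes narrowed
    if mouth_lift < -2:
        return "sad"     # corners pulled down
    return "neutral"

label = classify_expression({"mouth_corner_y": 100, "eye_opening": 12},
                            {"mouth_corner_y": 96, "eye_opening": 9})
```

A production system would track many landmarks and likely use a trained classifier, but the comparison logic is the same.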

Step 104: fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.

In the embodiments of the present application, the correspondence between expression categories and special effect models may be stored in advance. For example, when the expression category is sadness, the special effect model may be tears; when it is anger, the special effect model may be flames; and when it is nervousness, the special effect model may be cold sweat.

The special effect models may be stored in the material library of an application on the electronic device, which holds the different special effect models; alternatively, the application may download new special effect models from a server in real time, and the newly downloaded models are stored in the material library.

Optionally, after the expression category is determined, the above correspondence may be queried to obtain the special effect model matching the expression category; the three-dimensional model is then fused with that special effect model to obtain the three-dimensional model after special effect processing.
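The stored correspondence can be as simple as a lookup table; the category-to-effect pairs below follow the examples given in the text, and the asset names are hypothetical:

```python
# Illustrative mapping from recognized expression category to a special
# effect asset; asset names are placeholders, not real files.
EFFECT_BY_EXPRESSION = {
    "sad": "tears",
    "angry": "flames",
    "nervous": "cold_sweat",
}

def pick_effect(expression, library=EFFECT_BY_EXPRESSION):
    """Return the effect model for an expression, or None so the caller
    can skip fusion when no effect is registered for the category."""
    return library.get(expression)

effect = pick_effect("sad")
```

New entries downloaded from a server would simply extend the same table at run time.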

As a possible implementation, in order to improve the display of the three-dimensional model after special effect processing and enhance its realism, in the embodiments of the present application the angle of the special effect model relative to the three-dimensional model may be adjusted so that the two match in orientation; the special effect model is then rendered and mapped onto the three-dimensional model.

Further, after the three-dimensional model after special effect processing is obtained, it may be displayed on the display interface of the electronic device, so that the user can view the result intuitively.

According to the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional face image and the depth information corresponding to the face image are acquired; three-dimensional reconstruction of the face is then performed according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the expression category corresponding to the two-dimensional face image is identified; and finally the three-dimensional model is fused with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, and the process becomes more enjoyable and playable for the user. In addition, the special effect model is determined according to the expression the user makes and is then fused with the three-dimensional model, which improves the realism of the added effect and makes the processed result more natural.

As a possible implementation, in order to improve the efficiency and accuracy of expression category recognition, in the present application, after at least one frame of face image captured before the current frame is acquired, the expression category of the current frame is identified only when the difference between the keypoint positions in those earlier frames and the keypoint positions in the current frame exceeds a threshold. This process is described in detail below with reference to FIG. 2.

FIG. 2 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided in Embodiment 2 of the present application.

As shown in FIG. 2, on the basis of the embodiment shown in FIG. 1, step 103 may specifically include the following sub-steps.

Step 201: identify the position of each keypoint in the face image of the current frame.

Specifically, the position of each keypoint in the face image of the current frame may be identified based on keypoint recognition.

Step 202: for at least one frame of face image captured before the current frame, identify the position of each keypoint in that frame.

In the embodiments of the present application, at least one frame of face image captured before the current frame may be acquired, and the position of each keypoint in that frame may be determined based on keypoint recognition.

Step 203: determine whether the difference between the keypoint positions in the at least one earlier frame and the keypoint positions in the current frame is greater than a threshold; if so, perform step 204; otherwise, perform step 205.

The threshold may be preset in a built-in program of the electronic device, or it may be set by the user; this is not limited here.

Step 204: identify the expression category corresponding to the current frame.

In the embodiments of the present application, when the difference between the keypoint positions in the at least one earlier frame and the keypoint positions in the current frame is greater than the threshold, the expressions made by the user have changed considerably across consecutive frames, and the user may want to add a special effect. The expression category corresponding to the current frame is therefore identified, triggering the subsequent special effect addition. For the specific recognition process, refer to the execution of step 103 in the above embodiment, which is not repeated here.

Step 205: perform no processing.

In the embodiments of the present application, when the difference between the keypoint positions in the at least one earlier frame and the keypoint positions in the current frame is not greater than the threshold, the user's expression has not changed much across consecutive frames, and the user probably does not want to add a special effect; therefore, no processing is performed.
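Steps 203 to 205 amount to a gate on expression recognition. A minimal sketch, assuming the difference is measured as the mean Euclidean displacement of corresponding keypoints and using an arbitrary threshold (the patent does not fix either choice):

```python
def should_reclassify(prev_pts, curr_pts, threshold=5.0):
    """Run expression recognition only when the mean keypoint
    displacement between frames exceeds a threshold."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(prev_pts, curr_pts):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total / len(prev_pts) > threshold

still = should_reclassify([(0, 0), (10, 10)], [(1, 0), (10, 11)])   # tiny motion
moved = should_reclassify([(0, 0), (10, 10)], [(8, 6), (16, 18)])   # large motion
```

Gating this way avoids re-running the classifier on every frame when the face is essentially static.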

作为一种可能的实现方式,参见图3,在图1所示实施例的基础上,步骤104具体可以包括以下子步骤:As a possible implementation, referring to FIG. 3 , on the basis of the embodiment shown in FIG. 1 , step 104 may specifically include the following sub-steps:

步骤301,根据表情类别,获取对应的特效模型。Step 301: Acquire a corresponding special effect model according to the expression category.

作为一种可能的实现方式，可以预先存储表情类别与特效模型之间的对应关系，在确定人脸图像对应的表情类别后，可以查询上述对应关系，获取与表情类别匹配的特效模型，操作简单，且易于实现。As a possible implementation, the correspondence between expression categories and special effect models can be stored in advance. After the expression category corresponding to the face image is determined, the above correspondence can be queried to obtain the special effect model matching the expression category. The operation is simple and easy to implement.
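A minimal sketch of such a pre-stored correspondence and its lookup (Step 301) is shown below; the expression categories and effect-model names are invented for illustration and do not come from the patent.

```python
# Hypothetical pre-stored correspondence between expression categories and
# special effect models; all names here are illustrative assumptions.
EFFECT_BY_EXPRESSION = {
    "sad": "tears",
    "scared": "cold_sweat",
    "happy": "sparkles",
}

def get_effect_model(expression_category):
    """Step 301 sketch: query the stored correspondence for the special
    effect model matching the recognized expression category."""
    return EFFECT_BY_EXPRESSION.get(expression_category)
```

A category with no stored entry simply yields no effect model, which matches the "do nothing" branch of the method.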

步骤302,调整特效模型相对三维模型的角度,以使三维模型和特效模型角度匹配。Step 302: Adjust the angle of the special effect model relative to the three-dimensional model, so that the angles of the three-dimensional model and the special effect model match.

需要说明的是，将特效模型贴图至三维模型之前，需要调整特效模型相对三维模型的角度，以使三维模型和特效模型角度匹配。比如，特效模型为眼泪时，当三维模型中人脸是正对屏幕或者侧对屏幕时，眼泪的显示效果是不同的，因此，需要根据人脸的偏转角度，调整特效模型的转动角度，从而使得三维模型和特效模型角度匹配，进而提升后续特效处理效果。It should be noted that, before mapping the special effect model onto the 3D model, the angle of the special effect model relative to the 3D model needs to be adjusted so that the angles of the 3D model and the special effect model match. For example, when the special effect model is tears, the display effect of the tears differs depending on whether the face in the 3D model is facing the screen or turned sideways to it. Therefore, the rotation angle of the special effect model needs to be adjusted according to the deflection angle of the face, so that the angles of the 3D model and the special effect model match, thereby improving the effect of subsequent special effects processing.

作为一种可能的实现方式，可以预先确定不同特效模型适用的角度参数，其中，角度参数可以为定值，或者为取值范围(例如为[-45°,45°])，对此不作限制。在确定与表情类别对应的特效模型后，可以查询该特效模型适用的角度参数，而后旋转特效模型，以使特效模型中预设目标关键点的第一连线，与三维模型中预设参考关键点的第二连线之间的夹角符合角度参数。As a possible implementation, the angle parameters applicable to different special effect models may be predetermined, where an angle parameter may be a fixed value or a value range (for example, [-45°, 45°]), which is not limited here. After the special effect model corresponding to the expression category is determined, the angle parameter applicable to that special effect model can be queried, and the special effect model can then be rotated so that the included angle between the first line connecting the preset target key points in the special effect model and the second line connecting the preset reference key points in the 3D model conforms to the angle parameter.
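The rotation described above could be sketched as follows, simplified to 2D points (the patent operates on a 3D model). Clamping the current angle into the allowed range is one possible way to make the included angle "conform to the angle parameter"; that choice, and all parameter names, are assumptions.

```python
import numpy as np

def align_effect_angle(effect_pts, target_idx, ref_dir, angle_range_deg):
    """Step 302 sketch (2D): rotate the effect model so the line through its
    two preset target keypoints makes an angle with the reference direction
    that falls inside the applicable angle parameter range."""
    effect_pts = np.asarray(effect_pts, dtype=float)
    a, b = effect_pts[target_idx[0]], effect_pts[target_idx[1]]
    line = b - a
    # Signed angle between the first line and the reference direction.
    cur = np.degrees(np.arctan2(line[1], line[0]) - np.arctan2(ref_dir[1], ref_dir[0]))
    lo, hi = angle_range_deg
    desired = min(max(cur, lo), hi)           # clamp into the allowed range
    theta = np.radians(desired - cur)         # residual rotation to apply
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    # Rotate all effect-model points about the first target keypoint.
    return (effect_pts - a) @ rot.T + a
```

In 3D, the same idea would use a rotation about the face's deflection axis rather than a 2D rotation matrix.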

步骤303,根据特效模型,在三维模型中,查询对应的待贴图关键点。Step 303 , according to the special effect model, in the three-dimensional model, query the corresponding key points to be mapped.

应当理解的是，不同的特效模型，在三维模型中的待贴图关键点不同。例如，当特效模型为眼泪时，一般眼泪是从眼角对应的关键点，流下至鼻翼对应的关键点，而后再从鼻翼对应的关键点流下至嘴角对应的关键点，因此，可以将眼角至鼻翼，以及鼻翼至嘴角对应的关键点，作为待贴图关键点。或者，当特效模型为冷汗时，一般冷汗是从额头对应的关键点，流下至眉尾对应的关键点，再从眉尾对应的关键点流下至脸颊对应的关键点，接着从脸颊对应的关键点流下至下巴对应的关键点，因此，可以将额头至眉尾、眉尾至脸颊，以及脸颊至下巴对应的关键点，作为待贴图关键点。It should be understood that different special effect models have different key points to be mapped in the 3D model. For example, when the special effect model is tears, a tear generally flows from the key point corresponding to the eye corner down to the key point corresponding to the nose wing, and then from the key point corresponding to the nose wing down to the key point corresponding to the mouth corner. Therefore, the key points corresponding to the eye corner to the nose wing, and the nose wing to the mouth corner, can be used as the key points to be mapped. Alternatively, when the special effect model is cold sweat, the sweat generally flows from the key point corresponding to the forehead down to the key point corresponding to the brow tail, then from the key point corresponding to the brow tail down to the key point corresponding to the cheek, and then from the key point corresponding to the cheek down to the key point corresponding to the chin. Therefore, the key points corresponding to the forehead to the brow tail, the brow tail to the cheek, and the cheek to the chin can be used as the key points to be mapped.

作为一种可能的实现方式，可以预先建立不同特效模型与待贴图关键点之间的对应关系，在确定与表情类别对应的特效模型后，可以查询上述对应关系，获取在三维模型中，与该特效模型对应的待贴图关键点。As a possible implementation, the correspondence between different special effect models and the key points to be mapped can be established in advance. After the special effect model corresponding to the expression category is determined, the above correspondence can be queried to obtain the key points to be mapped in the 3D model that correspond to the special effect model.
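A minimal sketch of such a pre-built correspondence and its query (Step 303) follows; the keypoint region names are assumptions made for illustration, mirroring the tears and cold-sweat examples above.

```python
# Illustrative pre-built correspondence between special effect models and the
# keypoint chains they are drawn along; region names are assumed, not patent terms.
KEYPOINTS_TO_MAP = {
    "tears":      [("eye_corner", "nose_wing"), ("nose_wing", "mouth_corner")],
    "cold_sweat": [("forehead", "brow_tail"), ("brow_tail", "cheek"), ("cheek", "chin")],
}

def query_keypoints_to_map(effect_model):
    """Step 303 sketch: query the keypoints to be mapped for an effect model."""
    return KEYPOINTS_TO_MAP.get(effect_model, [])
```

The returned keypoint chain then determines the to-be-mapped area in Step 304.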

步骤304,在三维模型中,将特效模型对应的待贴图关键点所在区域作为待贴图区域。Step 304: In the three-dimensional model, the area where the key points to be mapped corresponding to the special effect model are located is used as the area to be mapped.

本申请实施例中，当特效模型不同时，待贴图区域不同，在确定三维模型中，特效模型对应的待贴图关键点时，可以将对应的待贴图关键点所在区域作为待贴图区域。In the embodiment of the present application, different special effect models have different areas to be mapped. When the key points to be mapped corresponding to the special effect model are determined in the 3D model, the area where those key points are located can be used as the area to be mapped.

步骤305,根据三维模型的待贴图区域,对特效模型进行形变,以使形变后的特效模型覆盖待贴图区域。Step 305 , deform the special effect model according to the to-be-mapped area of the three-dimensional model, so that the deformed special-effect model covers the to-be-mapped area.

本申请实施例中,在确定三维模型中的待贴图区域后,可以对特效模型进行形变,以使形变后的特效模型覆盖待贴图区域,从而提升特效处理效果。In the embodiment of the present application, after determining the area to be mapped in the 3D model, the special effect model may be deformed, so that the deformed special effect model covers the area to be mapped, thereby improving the effect of special effects processing.
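One simple way to realize the deformation of Step 305 is an axis-aligned bounding-box stretch, sketched below in 2D. The patent does not specify a deformation method, so this affine fit is an assumption; a production implementation would warp the effect mesh in 3D.

```python
import numpy as np

def deform_to_region(effect_pts, region_pts):
    """Step 305 sketch: affinely stretch the effect model's bounding box so
    that the deformed model covers the to-be-mapped region (2D simplification)."""
    effect_pts = np.asarray(effect_pts, dtype=float)
    region_pts = np.asarray(region_pts, dtype=float)
    e_min, e_max = effect_pts.min(axis=0), effect_pts.max(axis=0)
    r_min, r_max = region_pts.min(axis=0), region_pts.max(axis=0)
    # Guard against zero extent on an axis to avoid division by zero.
    extent = np.where(e_max - e_min == 0, 1.0, e_max - e_min)
    scale = (r_max - r_min) / extent
    return (effect_pts - e_min) * scale + r_min
```

After this stretch, every effect-model point lies inside the region's bounding box, so the deformed model covers the to-be-mapped area.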

步骤306,对特效模型进行渲染后,贴图至三维模型。Step 306 , after rendering the special effect model, map it to the three-dimensional model.

为了使得特效模型与三维模型匹配,进而保证特效处理后的三维模型的显示效果,本申请中,可以对特效模型进行渲染后,贴图至三维模型。In order to match the special effect model with the three-dimensional model, thereby ensuring the display effect of the three-dimensional model after special effect processing, in the present application, the special effect model may be rendered and then mapped to the three-dimensional model.

作为一种可能的实现方式,可以根据三维模型的光效,对特效模型进行渲染,从而使得渲染后的特效模型的光效与三维模型匹配,进而提升特效处理后的三维模型的显示效果。As a possible implementation, the special effect model can be rendered according to the light effect of the three-dimensional model, so that the light effect of the rendered special effect model matches the three-dimensional model, thereby improving the display effect of the three-dimensional model after special effect processing.
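As one illustration of matching the effect model's lighting to the 3D model before mapping (Step 306), a Lambertian shading pass is sketched below. The patent does not specify a shading model, so Lambert's cosine law and the parameter layout are assumptions.

```python
import numpy as np

def shade_effect(colors, light_dir, normals):
    """Step 306 sketch: scale the effect model's vertex colors by a Lambertian
    term so its lighting roughly matches the 3D model's light direction.

    colors:    (N, 3) RGB values in [0, 1]
    light_dir: 3-vector pointing toward the light (assumed known from the model)
    normals:   (N, 3) per-vertex normals of the effect model
    """
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    # Lambert's cosine law, clamped so back-facing vertices go dark.
    intensity = np.clip(n @ light, 0.0, 1.0)[:, None]
    return np.asarray(colors, dtype=float) * intensity
```

The shaded colors would then be used when the rendered effect model is mapped onto the 3D model.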

为了实现上述实施例,本申请还提出一种基于三维模型的特效处理装置。In order to realize the above embodiments, the present application also proposes a special effect processing device based on a three-dimensional model.

图4为本申请实施例四所提供的基于三维模型的特效处理装置的结构示意图。FIG. 4 is a schematic structural diagram of a special effect processing apparatus based on a three-dimensional model provided by Embodiment 4 of the present application.

如图4所示，该基于三维模型的特效处理装置100包括：获取模块110、重构模块120、识别模块130，以及融合模块140。其中，As shown in FIG. 4, the three-dimensional model-based special effect processing apparatus 100 includes: an acquisition module 110, a reconstruction module 120, an identification module 130, and a fusion module 140. Among them,

获取模块110,用于获取采集到的二维的人脸图像,以及人脸图像对应的深度信息。The acquiring module 110 is configured to acquire the collected two-dimensional face image and depth information corresponding to the face image.

重构模块120,用于根据深度信息和人脸图像,对人脸进行三维重构,以得到人脸对应的三维模型。The reconstruction module 120 is configured to perform three-dimensional reconstruction of the human face according to the depth information and the human face image, so as to obtain a three-dimensional model corresponding to the human face.

识别模块130,用于识别与二维的人脸图像对应的表情类别。The identification module 130 is used for identifying the expression category corresponding to the two-dimensional face image.

融合模块140,用于将三维模型与表情类别对应的特效模型进行融合,得到特效处理后的三维模型。The fusion module 140 is configured to fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.

进一步地,在本申请实施例的一种可能的实现方式中,参见图5,在图4所示实施例的基础上,该基于三维模型的特效处理装置100还可以包括:Further, in a possible implementation manner of the embodiment of the present application, referring to FIG. 5 , on the basis of the embodiment shown in FIG. 4 , the three-dimensional model-based special effect processing apparatus 100 may further include:

作为一种可能的实现方式,识别模块130,包括:As a possible implementation manner, the identification module 130 includes:

第一识别子模块131,用于识别当前帧的人脸图像中各关键点的位置。The first identification sub-module 131 is used to identify the position of each key point in the face image of the current frame.

第二识别子模块132,用于对当前帧之前采集的至少一帧人脸图像,识别至少一帧人脸图像中各关键点的位置。The second identification sub-module 132 is configured to identify the position of each key point in the at least one frame of face image collected before the current frame.

第三识别子模块133,用于若至少一帧人脸图像中各关键点的位置,与当前帧的人脸图像中各关键点的位置之间的差异大于阈值,识别当前帧对应的表情类别。The third identification sub-module 133 is configured to identify the expression category corresponding to the current frame if the difference between the positions of the key points in the at least one frame of the face image and the positions of the key points in the face image of the current frame is greater than a threshold .

作为一种可能的实现方式,融合模块140,包括:As a possible implementation manner, the fusion module 140 includes:

获取子模块141,用于根据表情类别,获取对应的特效模型。The obtaining sub-module 141 is used to obtain the corresponding special effect model according to the expression category.

调整子模块142,用于调整特效模型相对三维模型的角度,以使三维模型和特效模型角度匹配。The adjustment sub-module 142 is used to adjust the angle of the special effect model relative to the three-dimensional model, so that the angles of the three-dimensional model and the special effect model match.

作为一种可能的实现方式，调整子模块142，具体用于：查询特效模型适用的角度参数；旋转特效模型，以使特效模型中预设目标关键点的第一连线，与三维模型中预设参考关键点的第二连线之间的夹角符合角度参数。As a possible implementation, the adjustment sub-module 142 is specifically configured to: query the angle parameter applicable to the special effect model; and rotate the special effect model so that the included angle between the first line connecting the preset target key points in the special effect model and the second line connecting the preset reference key points in the 3D model conforms to the angle parameter.

贴图子模块143,用于对特效模型进行渲染后,贴图至三维模型。The texture sub-module 143 is used to render the special effect model and then map the texture to the 3D model.

作为一种可能的实现方式,贴图子模块143,具体用于:根据三维模型的光效,对特效模型进行渲染。As a possible implementation manner, the texture sub-module 143 is specifically configured to: render the special effect model according to the light effect of the three-dimensional model.

形变子模块144，用于在对特效模型进行渲染后，贴图至三维模型之前，根据三维模型的待贴图区域，对特效模型进行形变，以使形变后的特效模型覆盖待贴图区域。The deformation sub-module 144 is configured to, after the special effect model is rendered and before it is mapped onto the three-dimensional model, deform the special effect model according to the to-be-mapped area of the three-dimensional model, so that the deformed special effect model covers the to-be-mapped area.

查询子模块145,用于在根据三维模型的待贴图区域,对特效模型进行形变之前,根据特效模型,在三维模型中,查询对应的待贴图关键点。The query sub-module 145 is configured to query the corresponding key points to be mapped in the three-dimensional model according to the special effect model before deforming the special effect model according to the to-be-mapped area of the three-dimensional model.

处理子模块146,用于在三维模型中,将特效模型对应的待贴图关键点所在区域作为待贴图区域。The processing sub-module 146 is configured to, in the three-dimensional model, use the area where the key points to be mapped corresponding to the special effect model are located as the area to be mapped.

需要说明的是,前述对基于三维模型的特效处理方法实施例的解释说明也适用于该实施例的基于三维模型的特效处理装置100,此处不再赘述。It should be noted that, the foregoing explanations on the embodiment of the three-dimensional model-based special effect processing method are also applicable to the three-dimensional model-based special effect processing apparatus 100 of this embodiment, and are not repeated here.

本申请实施例的基于三维模型的特效处理装置，通过获取采集到的二维的人脸图像，以及人脸图像对应的深度信息，而后根据深度信息和人脸图像，对人脸进行三维重构，以得到人脸对应的三维模型，接着识别与二维的人脸图像对应的表情类别，最后将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。由此，无需用户手动切换不同的特效模型，提升特效添加的自动化程度，以及提升用户在特效添加过程中的乐趣性和可玩性。此外，根据用户做出的表情，确定相应的特效模型，从而将特效模型与三维模型进行融合，可以提升特效添加的真实感，使得处理后的效果更加自然。The three-dimensional model-based special effect processing apparatus of the embodiment of the present application obtains the collected two-dimensional face image and the depth information corresponding to the face image, then performs three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, then identifies the expression category corresponding to the two-dimensional face image, and finally fuses the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, the degree of automation of special effect addition is improved, and the fun and playability of the special effect addition process are improved for the user. In addition, the corresponding special effect model is determined according to the expression made by the user, so that fusing the special effect model with the three-dimensional model can enhance the realism of the added special effects and make the processed result more natural.

为了实现上述实施例，本申请还提出一种电子设备，包括：存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序，处理器执行程序时，实现本申请前述实施例提出的基于三维模型的特效处理方法。In order to implement the above embodiments, the present application further proposes an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the three-dimensional model-based special effect processing method proposed in the foregoing embodiments of the present application is implemented.

为了实现上述实施例，本申请还提出一种计算机可读存储介质，其上存储有计算机程序，其特征在于，该程序被处理器执行时实现本申请前述实施例提出的基于三维模型的特效处理方法。In order to implement the above embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the three-dimensional model-based special effect processing method proposed in the foregoing embodiments of the present application is implemented.

图6为一个实施例中电子设备200的内部结构示意图。该电子设备200包括通过系统总线210连接的处理器220、存储器230、显示器240和输入装置250。其中，电子设备200的存储器230存储有操作系统和计算机可读指令。该计算机可读指令可被处理器220执行，以实现本申请实施方式的基于三维模型的特效处理方法。该处理器220用于提供计算和控制能力，支撑整个电子设备200的运行。电子设备200的显示器240可以是液晶显示屏或者电子墨水显示屏等，输入装置250可以是显示器240上覆盖的触摸层，也可以是电子设备200外壳上设置的按键、轨迹球或触控板，也可以是外接的键盘、触控板或鼠标等。该电子设备200可以是手机、平板电脑、笔记本电脑、个人数字助理或穿戴式设备(例如智能手环、智能手表、智能头盔、智能眼镜)等。FIG. 6 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 220 to implement the three-dimensional model-based special effect processing method of the embodiments of the present application. The processor 220 is used to provide computing and control capabilities to support the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covering the display 240, a button, a trackball, or a touchpad provided on the casing of the electronic device 200, or an external keyboard, touchpad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (for example, a smart bracelet, a smart watch, a smart helmet, or smart glasses).

本领域技术人员可以理解,图6中示出的结构,仅仅是与本申请方案相关的部分结构的示意图,并不构成对本申请方案所应用于其上的电子设备200的限定,具体的电子设备200可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。Those skilled in the art can understand that the structure shown in FIG. 6 is only a schematic diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the electronic device 200 to which the solution of the present application is applied. The specific electronic device 200 may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.

为了清楚说明本实施例提供的电子设备,请参阅图7,提供了本申请实施例的图像处理电路,图像处理电路可利用硬件和/或软件组件实现。In order to clearly illustrate the electronic device provided in this embodiment, please refer to FIG. 7 , which provides an image processing circuit of an embodiment of the present application, and the image processing circuit may be implemented by hardware and/or software components.

需要说明的是,图7为作为一种可能的实现方式的图像处理电路的示意图。为便于说明,仅示出与本申请实施例相关的各个方面。It should be noted that FIG. 7 is a schematic diagram of an image processing circuit as a possible implementation manner. For the convenience of description, only various aspects related to the embodiments of the present application are shown.

如图7，该图像处理电路具体包括：图像单元310、深度信息单元320和处理单元330。其中，As shown in FIG. 7, the image processing circuit specifically includes: an image unit 310, a depth information unit 320, and a processing unit 330. Among them,

图像单元310,用于输出二维的人脸图像。The image unit 310 is used for outputting a two-dimensional face image.

深度信息单元320,用于输出深度信息。The depth information unit 320 is used for outputting depth information.

本申请实施例中,可以通过图像单元310,获取二维的人脸图像,以及通过深度信息单元320,获取人脸图像对应的深度信息。In this embodiment of the present application, the image unit 310 may be used to obtain a two-dimensional face image, and the depth information unit 320 may be used to obtain depth information corresponding to the face image.

处理单元330，分别与图像单元310和深度信息单元320电性连接，用于根据图像单元310获取的二维的人脸图像，以及深度信息单元320获取的对应的深度信息，对人脸进行三维重构，以得到人脸对应的三维模型，识别与二维的人脸图像对应的表情类别，将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。The processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and is configured to perform three-dimensional reconstruction of the face according to the two-dimensional face image obtained by the image unit 310 and the corresponding depth information obtained by the depth information unit 320, so as to obtain a three-dimensional model corresponding to the face, identify the expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.

本申请实施例中，图像单元310获取的二维的人脸图像可以发送至处理单元330，以及深度信息单元320获取的人脸图像对应的深度信息可以发送至处理单元330，处理单元330可以根据人脸图像以及深度信息，对人脸进行三维重构，以得到人脸对应的三维模型，识别与二维的人脸图像对应的表情类别，将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。具体的实现过程，可以参见上述图1至图3实施例中对基于三维模型的特效处理方法的解释说明，此处不做赘述。In this embodiment of the present application, the two-dimensional face image obtained by the image unit 310 may be sent to the processing unit 330, and the depth information corresponding to the face image obtained by the depth information unit 320 may be sent to the processing unit 330. The processing unit 330 may perform three-dimensional reconstruction of the face according to the face image and the depth information to obtain a three-dimensional model corresponding to the face, identify the expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. For the specific implementation process, reference may be made to the explanation of the three-dimensional model-based special effect processing method in the embodiments of FIG. 1 to FIG. 3, which will not be repeated here.

进一步地,作为本申请一种可能的实现方式,参见图8,在图7所示实施例的基础上,该图像处理电路还可以包括:Further, as a possible implementation manner of the present application, referring to FIG. 8 , on the basis of the embodiment shown in FIG. 7 , the image processing circuit may further include:

作为一种可能的实现方式，图像单元310具体可以包括：电性连接的图像传感器311和图像信号处理(Image Signal Processing，简称ISP)处理器312。其中，As a possible implementation, the image unit 310 may specifically include an electrically connected image sensor 311 and an image signal processing (ISP) processor 312. Among them,

图像传感器311,用于输出原始图像数据。The image sensor 311 is used to output raw image data.

ISP处理器312,用于根据原始图像数据,输出人脸图像。The ISP processor 312 is configured to output a face image according to the original image data.

本申请实施例中，图像传感器311捕捉的原始图像数据首先由ISP处理器312处理，ISP处理器312对原始图像数据进行分析以捕捉可用于确定图像传感器311的一个或多个控制参数的图像统计信息，包括YUV格式或者RGB格式的人脸图像。其中，图像传感器311可包括色彩滤镜阵列(如Bayer滤镜)，以及对应的感光单元，图像传感器311可获取每个感光单元捕捉的光强度和波长信息，并提供可由ISP处理器312处理的一组原始图像数据。ISP处理器312对原始图像数据进行处理后，得到YUV格式或者RGB格式的人脸图像，并发送至处理单元330。In this embodiment of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including a face image in YUV format or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 may acquire the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After the ISP processor 312 processes the raw image data, a face image in YUV format or RGB format is obtained and sent to the processing unit 330.

其中,ISP处理器312在对原始图像数据进行处理时,可以按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器312可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。When processing the original image data, the ISP processor 312 can process the original image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Among them, the image processing operations can be performed with the same or different bit depth precision.

作为一种可能的实现方式，深度信息单元320，包括电性连接的结构光传感器321和深度图生成芯片322。其中，As a possible implementation, the depth information unit 320 includes an electrically connected structured light sensor 321 and a depth map generation chip 322. Among them,

结构光传感器321,用于生成红外散斑图。The structured light sensor 321 is used to generate an infrared speckle pattern.

深度图生成芯片322,用于根据红外散斑图,输出深度信息;深度信息包括深度图。The depth map generation chip 322 is configured to output depth information according to the infrared speckle map; the depth information includes a depth map.

本申请实施例中，结构光传感器321向被摄物投射散斑结构光，并获取被摄物反射的结构光，根据反射的结构光成像，得到红外散斑图。结构光传感器321将该红外散斑图发送至深度图生成芯片322，以便深度图生成芯片322根据红外散斑图确定结构光的形态变化情况，进而据此确定被摄物的深度，得到深度图(Depth Map)，该深度图指示了红外散斑图中各像素点的深度。深度图生成芯片322将深度图发送至处理单元330。In the embodiment of the present application, the structured light sensor 321 projects speckle structured light onto the subject, acquires the structured light reflected by the subject, and obtains an infrared speckle image by imaging the reflected structured light. The structured light sensor 321 sends the infrared speckle image to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle image, and then determines the depth of the subject accordingly to obtain a depth map (Depth Map), which indicates the depth of each pixel in the infrared speckle image. The depth map generation chip 322 sends the depth map to the processing unit 330.

作为一种可能的实现方式，处理单元330，包括：电性连接的CPU331和GPU(Graphics Processing Unit，图形处理器)332。其中，As a possible implementation, the processing unit 330 includes an electrically connected CPU 331 and a GPU (Graphics Processing Unit) 332. Among them,

CPU331,用于根据标定数据,对齐人脸图像与深度图,根据对齐后的人脸图像与深度图,输出人脸对应的三维模型。The CPU 331 is configured to align the face image and the depth map according to the calibration data, and output a three-dimensional model corresponding to the face according to the aligned face image and the depth map.

GPU332,用于识别与二维的人脸图像对应的表情类别,将三维模型与表情类别对应的特效模型进行融合,得到特效处理后的三维模型。The GPU 332 is configured to identify the expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.

本申请实施例中，CPU331从ISP处理器312获取到人脸图像，从深度图生成芯片322获取到深度图，结合预先得到的标定数据，可以将人脸图像与深度图对齐，从而确定出人脸图像中各像素点对应的深度信息。进而，CPU331根据深度信息和人脸图像，对人脸进行三维重构，以得到人脸对应的三维模型。In the embodiment of the present application, the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322. Combined with the calibration data obtained in advance, the face image can be aligned with the depth map to determine the depth information corresponding to each pixel in the face image. Furthermore, the CPU 331 performs three-dimensional reconstruction of the face according to the depth information and the face image, so as to obtain the three-dimensional model corresponding to the face.
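Once the face image and depth map are aligned, the per-pixel depth can be turned into 3D points for reconstruction. The sketch below back-projects an aligned depth map through a pinhole camera model; the intrinsic parameters stand in for the patent's calibration data, and their names and this camera model are assumptions.

```python
import numpy as np

def backproject(depth_map, fx, fy, cx, cy):
    """Sketch: lift an aligned depth map to 3D points with pinhole intrinsics.

    depth_map: (H, W) array of per-pixel depths (from the depth map chip)
    fx, fy:    focal lengths in pixels; cx, cy: principal point (assumed
               to come from the pre-obtained calibration data)
    Returns an (H, W, 3) array of camera-space points.
    """
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

The resulting point cloud, together with the aligned color image, is the input a 3D face reconstruction step would consume.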

CPU331将人脸对应的三维模型发送至GPU332，以便GPU332根据人脸对应的三维模型执行如前述实施例中描述的基于三维模型的特效处理方法，实现将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。The CPU 331 sends the three-dimensional model corresponding to the face to the GPU 332, so that the GPU 332 executes the three-dimensional model-based special effect processing method described in the foregoing embodiments according to the three-dimensional model corresponding to the face, fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.

进一步地,图像处理电路还可以包括:显示单元340。Further, the image processing circuit may further include: a display unit 340 .

显示单元340,与GPU332电性连接,用于根据特效处理后的三维模型进行展示。The display unit 340 is electrically connected to the GPU 332, and is used for displaying according to the three-dimensional model processed by special effects.

具体地,GPU332处理得到的特效处理后的三维模型,可以由显示器340显示。Specifically, the special effect processed three-dimensional model obtained by the GPU 332 can be displayed on the display 340 .

可选地,图像处理电路还可以包括:编码器350和存储器360。Optionally, the image processing circuit may further include: an encoder 350 and a memory 360 .

本申请实施例中,GPU332处理得到的特效处理后的三维模型,还可以由编码器350编码后存储至存储器360,其中,编码器350可由协处理器实现。In the embodiment of the present application, the three-dimensional model after the special effect processing obtained by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.

在一个实施例中，存储器360可以为多个，或者划分为多个存储空间，存储GPU332处理后的图像数据可存储至专用存储器，或者专用存储空间，并可包括DMA(Direct Memory Access，直接存储器存取)特征。存储器360可被配置为实现一个或多个帧缓冲器。In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.

下面结合图8,对上述过程进行详细说明。The above process will be described in detail below with reference to FIG. 8 .

如图8所示，图像传感器311捕捉的原始图像数据首先由ISP处理器312处理，ISP处理器312对原始图像数据进行分析以捕捉可用于确定图像传感器311的一个或多个控制参数的图像统计信息，包括YUV格式或者RGB格式的人脸图像，并发送至CPU331。As shown in FIG. 8, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including a face image in YUV format or RGB format, and sends the face image to the CPU 331.

如图8所示,结构光传感器321向被摄物投射散斑结构光,并获取被摄物反射的结构光,根据反射的结构光成像,得到红外散斑图。结构光传感器321将该红外散斑图发送至深度图生成芯片322,以便深度图生成芯片322根据红外散斑图确定结构光的形态变化情况,进而据此确定被摄物的深度,得到深度图(Depth Map)。深度图生成芯片322将深度图发送至CPU331。As shown in FIG. 8 , the structured light sensor 321 projects speckle structured light to the subject, acquires the structured light reflected by the subject, and forms an infrared speckle image according to the reflected structured light. The structured light sensor 321 sends the infrared speckle image to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle image, and then determines the depth of the subject based on this to obtain the depth map (Depth Map). The depth map generation chip 322 sends the depth map to the CPU 331 .

CPU331从ISP处理器312获取到人脸图像，从深度图生成芯片322获取到深度图，结合预先得到的标定数据，可以将人脸图像与深度图对齐，从而确定出人脸图像中各像素点对应的深度信息。进而，CPU331根据深度信息和人脸图像，对人脸进行三维重构，以得到人脸对应的三维模型。The CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322. Combined with the calibration data obtained in advance, the face image can be aligned with the depth map to determine the depth information corresponding to each pixel in the face image. Furthermore, the CPU 331 performs three-dimensional reconstruction of the face according to the depth information and the face image, so as to obtain the three-dimensional model corresponding to the face.

CPU331将人脸对应的三维模型发送至GPU332，以便GPU332根据人脸的三维模型执行如前述实施例中描述的基于三维模型的特效处理方法，实现将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。GPU332处理得到的特效处理后的三维模型，可以由显示器340显示，和/或，由编码器350编码后存储至存储器360。The CPU 331 sends the three-dimensional model corresponding to the face to the GPU 332, so that the GPU 332 executes the three-dimensional model-based special effect processing method described in the foregoing embodiments according to the three-dimensional model of the face, fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. The special effect processed three-dimensional model obtained by the GPU 332 may be displayed on the display 340 and/or stored in the memory 360 after being encoded by the encoder 350.

例如,以下为运用图6中的处理器220或运用图8中的图像处理电路(具体为CPU331和GPU332)实现控制方法的步骤:For example, the following are the steps of implementing the control method using the processor 220 in FIG. 6 or using the image processing circuit (specifically CPU331 and GPU332) in FIG. 8 :

CPU331获取二维的人脸图像，以及人脸图像对应的深度信息；CPU331根据深度信息和人脸图像，对人脸进行三维重构，以得到人脸对应的三维模型；GPU332识别与二维的人脸图像对应的表情类别；GPU332将三维模型与表情类别对应的特效模型进行融合，得到特效处理后的三维模型。The CPU 331 obtains a two-dimensional face image and the depth information corresponding to the face image; the CPU 331 performs three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the GPU 332 identifies the expression category corresponding to the two-dimensional face image; and the GPU 332 fuses the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.

在本说明书的描述中，参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中，对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且，描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外，在不相互矛盾的情况下，本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of this specification, description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples", etc., means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise expressly and specifically defined.

Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device. For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.

It should be understood that parts of this application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any of the following technologies known in the art, or a combination thereof: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.

Those of ordinary skill in the art will understand that all or part of the steps carried out by the methods of the above embodiments may be completed by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of or a combination of the steps of the method embodiments.

In addition, each functional unit in the embodiments of the present application may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (7)

1. A special effect processing method based on a three-dimensional model, characterized by comprising the following steps:
acquiring a captured two-dimensional face image and depth information corresponding to the face image;
performing three-dimensional reconstruction of a face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, wherein key points extracted from the depth information and key points extracted from color information corresponding to the face image are registered and fused to generate the three-dimensional model;
identifying an expression category corresponding to the two-dimensional face image;
acquiring a corresponding special effect model according to the expression category;
adjusting an angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model are angle-matched;
querying a pre-established correspondence between different special effect models and key points of regions to be texture-mapped, and acquiring, in the three-dimensional model, the key points of the region to be texture-mapped that corresponds to the special effect model;
taking, in the three-dimensional model, the region where the key points corresponding to the special effect model are located as the region to be texture-mapped;
deforming the special effect model according to the region to be texture-mapped of the three-dimensional model, so that the deformed special effect model covers the region to be texture-mapped; and
after rendering the special effect model, texture-mapping it onto the three-dimensional model.
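The "deform the special effect model so that it covers the region to be texture-mapped" step of claim 1 can be illustrated with a minimal 2-D sketch. The bounding-box affine fit below, and every name in it, is an assumption chosen for illustration; the claim does not specify this particular deformation:

```python
# Hypothetical sketch: scale and translate the effect model's vertices so their
# 2-D bounding box coincides with the bounding box of the region's key points.

def bbox(points):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of 2-D points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def deform_to_region(effect_vertices, region_keypoints):
    """Affinely warp effect vertices so they cover the region's bounding box."""
    ex0, ey0, ex1, ey1 = bbox(effect_vertices)
    rx0, ry0, rx1, ry1 = bbox(region_keypoints)
    sx = (rx1 - rx0) / (ex1 - ex0)  # horizontal stretch factor
    sy = (ry1 - ry0) / (ey1 - ey0)  # vertical stretch factor
    return [(rx0 + (x - ex0) * sx, ry0 + (y - ey0) * sy)
            for x, y in effect_vertices]

# A unit-square effect model stretched onto a wider face region.
effect = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
region = [(10.0, 20.0), (14.0, 20.0), (14.0, 22.0), (10.0, 22.0)]
print(deform_to_region(effect, region)[2])  # → (14.0, 22.0)
```

After the warp, the deformed effect exactly spans the region's extents, which is the property the claim requires before the rendered effect is texture-mapped onto the three-dimensional model.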
2. The special effect processing method according to claim 1, wherein identifying the expression category corresponding to the two-dimensional face image comprises:
identifying the position of each key point in the face image of the current frame;
identifying the position of each key point in at least one frame of face image captured before the current frame; and
if the difference between the position of each key point in the at least one frame of face image and the position of the corresponding key point in the face image of the current frame is greater than a threshold, identifying the expression category corresponding to the current frame.
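The trigger condition in claim 2 — rerun expression recognition only when the key points have moved more than a threshold between frames — can be sketched as follows. The Euclidean-distance metric, the threshold value, and all names are illustrative assumptions:

```python
# Hypothetical sketch of claim 2's change trigger: compare key-point positions
# between a previous frame and the current frame against a threshold.
import math

def max_displacement(prev_kps, curr_kps):
    """Largest Euclidean movement of any key point between two frames."""
    return max(math.dist(p, c) for p, c in zip(prev_kps, curr_kps))

def should_reclassify(prev_kps, curr_kps, threshold=2.0):
    """True when movement exceeds the threshold, i.e. the expression may have changed."""
    return max_displacement(prev_kps, curr_kps) > threshold

prev_frame = [(10.0, 10.0), (20.0, 10.0), (15.0, 18.0)]   # e.g. eyes, mouth corner
still_frame = [(10.2, 10.1), (20.1, 10.0), (15.1, 18.2)]  # jitter only
smile_frame = [(10.0, 10.0), (20.0, 10.0), (15.0, 23.0)]  # mouth corner moved 5 px

print(should_reclassify(prev_frame, still_frame))  # → False (below threshold)
print(should_reclassify(prev_frame, smile_frame))  # → True  (movement > 2.0)
```

Gating classification this way avoids re-running the (comparatively expensive) expression recognizer on every frame when the face is effectively static. `math.dist` requires Python 3.8 or later.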
3. The special effect processing method according to claim 1, wherein adjusting the angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model are angle-matched comprises:
querying an angle parameter applicable to the special effect model; and
rotating the special effect model so that the included angle between a first line connecting preset target key points in the special effect model and a second line connecting preset reference key points in the three-dimensional model conforms to the angle parameter.
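Claim 3's angle matching can be sketched in 2-D: measure the current angle between the effect model's target-key-point line and the reference line, then rotate the effect model by the difference from the queried angle parameter. Working in 2-D and all function names are simplifying assumptions; the claimed method operates on a 3-D model:

```python
# Hypothetical 2-D sketch of claim 3: rotate the effect model until the angle
# between its key-point line and the reference line equals the angle parameter.
import math

def line_angle(p, q):
    """Orientation of the line p -> q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def rotate_model(vertices, pivot, theta):
    """Rotate every vertex about `pivot` by `theta` radians."""
    c, s = math.cos(theta), math.sin(theta)
    px, py = pivot
    return [(px + (x - px) * c - (y - py) * s,
             py + (x - px) * s + (y - py) * c) for x, y in vertices]

def match_angle(effect_line, reference_line, angle_param, effect_vertices):
    """Rotate the effect model so the inter-line angle equals angle_param."""
    current = line_angle(*effect_line) - line_angle(*reference_line)
    delta = angle_param - current          # how far we still need to turn
    return rotate_model(effect_vertices, effect_line[0], delta)

# Both lines start out horizontal; request a 90-degree included angle.
effect = [(0.0, 0.0), (1.0, 0.0)]
rotated = match_angle((effect[0], effect[1]), ((0.0, 0.0), (1.0, 0.0)),
                      math.pi / 2, effect)
print(rotated[1])  # ≈ (0.0, 1.0): the line now sits 90° from the reference
```

Rotating about the first target key point keeps that anchor fixed while the rest of the effect model swings into the required orientation.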
4. The special effect processing method according to claim 1, wherein rendering the special effect model comprises:
rendering the special effect model according to the lighting effect of the three-dimensional model.
5. A special effect processing apparatus based on a three-dimensional model, the apparatus comprising:
an acquisition module configured to acquire a captured two-dimensional face image and depth information corresponding to the face image;
a reconstruction module configured to perform three-dimensional reconstruction of a face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, wherein key points extracted from the depth information and key points extracted from color information corresponding to the face image are registered and fused to generate the three-dimensional model;
a recognition module configured to identify an expression category corresponding to the two-dimensional face image; and
a fusion module configured to acquire a corresponding special effect model according to the expression category; adjust an angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model are angle-matched; query a pre-established correspondence between different special effect models and key points of regions to be texture-mapped, and acquire, in the three-dimensional model, the key points of the region to be texture-mapped that corresponds to the special effect model; take, in the three-dimensional model, the region where the key points corresponding to the special effect model are located as the region to be texture-mapped; deform the special effect model according to the region to be texture-mapped of the three-dimensional model so that the deformed special effect model covers the region to be texture-mapped; and, after rendering the special effect model, texture-map it onto the three-dimensional model.
6. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the three-dimensional-model-based special effect processing method of any one of claims 1 to 4.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the three-dimensional-model-based special effect processing method of any one of claims 1 to 4.
CN201810934012.XA 2018-08-16 2018-08-16 Special effect processing method, device and electronic device based on 3D model Active CN109147037B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810934012.XA CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method, device and electronic device based on 3D model
PCT/CN2019/088118 WO2020034698A1 (en) 2018-08-16 2019-05-23 Three-dimensional model-based special effect processing method and device, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810934012.XA CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method, device and electronic device based on 3D model

Publications (2)

Publication Number Publication Date
CN109147037A CN109147037A (en) 2019-01-04
CN109147037B true CN109147037B (en) 2020-09-18

Family

ID=64789563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810934012.XA Active CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method, device and electronic device based on 3D model

Country Status (2)

Country Link
CN (1) CN109147037B (en)
WO (1) WO2020034698A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147037B (en) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 Special effect processing method, device and electronic device based on 3D model
CN109816586A (en) * 2019-01-30 2019-05-28 重庆传音通讯技术有限公司 Image processing method, device, terminal device and storage medium
CN110310318B (en) * 2019-07-03 2022-10-04 北京字节跳动网络技术有限公司 Special effect processing method and device, storage medium and terminal
CN111639613B (en) * 2020-06-04 2024-04-16 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN112004020B (en) * 2020-08-19 2022-08-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113538696B (en) * 2021-07-20 2024-08-13 广州博冠信息科技有限公司 Special effect generation method and device, storage medium and electronic equipment
CN114494556A (en) * 2022-01-30 2022-05-13 北京大甜绵白糖科技有限公司 Special effect rendering method, device and equipment and storage medium
CN114677386A (en) * 2022-03-25 2022-06-28 北京字跳网络技术有限公司 Special effect image processing method and device, electronic equipment and storage medium
CN114863005B (en) * 2022-04-19 2024-12-06 佛山虎牙虎信科技有限公司 A method, device, storage medium and equipment for rendering body special effects
CN115018749A (en) * 2022-07-22 2022-09-06 北京字跳网络技术有限公司 Image processing method, device, equipment, computer readable storage medium and product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088040A (en) * 1996-09-17 2000-07-11 Atr Human Information Processing Research Laboratories Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image
CN105118082A (en) * 2015-07-30 2015-12-02 科大讯飞股份有限公司 Personalized video generation method and system
CN106920274A (en) * 2017-01-20 2017-07-04 南京开为网络科技有限公司 Mobile terminal 2D key points rapid translating is the human face model building of 3D fusion deformations
CN107452034A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method and its device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0930585B1 (en) * 1998-01-14 2004-03-31 Canon Kabushiki Kaisha Image processing apparatus
CN101021952A (en) * 2007-03-23 2007-08-22 北京中星微电子有限公司 Method and apparatus for realizing three-dimensional video special efficiency
CN101452582B (en) * 2008-12-18 2013-09-18 北京中星微电子有限公司 Method and device for implementing three-dimensional video specific action
CN102054291A (en) * 2009-11-04 2011-05-11 厦门市美亚柏科信息股份有限公司 Method and device for reconstructing three-dimensional face based on single face image
US20140088750A1 (en) * 2012-09-21 2014-03-27 Kloneworld Pte. Ltd. Systems, methods and processes for mass and efficient production, distribution and/or customization of one or more articles
US9378576B2 (en) * 2013-06-07 2016-06-28 Faceshift Ag Online modeling for real-time facial animation
CN104346824A (en) * 2013-08-09 2015-02-11 汉王科技股份有限公司 Method and device for automatically synthesizing three-dimensional expression based on single facial image
CN104978764B (en) * 2014-04-10 2017-11-17 华为技术有限公司 3 d human face mesh model processing method and equipment
CN104732203B (en) * 2015-03-05 2019-03-26 中国科学院软件研究所 A kind of Emotion identification and tracking based on video information
CN108154550B (en) * 2017-11-29 2021-07-06 奥比中光科技集团股份有限公司 RGBD camera-based real-time three-dimensional face reconstruction method
CN108062791A (en) * 2018-01-12 2018-05-22 北京奇虎科技有限公司 A kind of method and apparatus for rebuilding human face three-dimensional model
CN109147037B (en) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 Special effect processing method, device and electronic device based on 3D model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088040A (en) * 1996-09-17 2000-07-11 Atr Human Information Processing Research Laboratories Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image
CN105118082A (en) * 2015-07-30 2015-12-02 科大讯飞股份有限公司 Personalized video generation method and system
CN106920274A (en) * 2017-01-20 2017-07-04 南京开为网络科技有限公司 Mobile terminal 2D key points rapid translating is the human face model building of 3D fusion deformations
CN107452034A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method and its device

Also Published As

Publication number Publication date
WO2020034698A1 (en) 2020-02-20
CN109147037A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109147037B (en) Special effect processing method, device and electronic device based on 3D model
CN108765273B (en) Virtual cosmetic surgery method and device for photographing faces
TWI751161B (en) Terminal equipment, smart phone, authentication method and system based on face recognition
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN109118569B (en) Rendering method and device based on three-dimensional model
JP7036879B2 (en) Techniques for displaying text more efficiently in virtual image generation systems
US11403819B2 (en) Three-dimensional model processing method, electronic device, and readable storage medium
US11069151B2 (en) Methods and devices for replacing expression, and computer readable storage media
CN109102559B (en) Three-dimensional model processing method and device
CN108764180A (en) Face identification method, device, electronic equipment and readable storage medium storing program for executing
CN108765272A (en) Image processing method and device, electronic equipment and readable storage medium
WO2021036314A1 (en) Facial image processing method and apparatus, image device, and storage medium
TWI421781B (en) Make-up simulation system, make-up simulation method, make-up simulation method and make-up simulation program
CN109272579B (en) Three-dimensional model-based makeup method and device, electronic equipment and storage medium
WO2020034786A1 (en) Three-dimensional model processing method, apparatus, electronic device and storage medium
JP7483301B2 (en) Image processing and image synthesis method, device, and computer program
WO2020034738A1 (en) Three-dimensional model processing method and apparatus, electronic device and readable storage medium
CN108876709A (en) Method for beautifying faces, device, electronic equipment and readable storage medium storing program for executing
CN109191393A (en) U.S. face method based on threedimensional model
CN107481318A (en) Method, device and terminal equipment for replacing user avatar
CN109242760A (en) Processing method, device and the electronic equipment of facial image
JP5966657B2 (en) Image generating apparatus, image generating method, and program
CN113327277B (en) A three-dimensional reconstruction method and device for bust
CN113763517B (en) Facial expression editing method and electronic equipment
RU2703327C1 (en) Method of processing a two-dimensional image and a user computing device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant