
CN114155569B - Cosmetic progress detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114155569B (application CN202111015242.4A)
Authority
CN
China
Prior art keywords
image, makeup, frame image, area, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111015242.4A
Other languages
Chinese (zh)
Other versions
CN114155569A (en)
Inventor
刘聪
苗锋
张梦洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing So Young Technology Co Ltd
Original Assignee
Beijing So Young Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing So Young Technology Co Ltd filed Critical Beijing So Young Technology Co Ltd
Priority to CN202111306864.2A (publication CN115731591A)
Priority to CN202111308461.1A (publication CN115731142A)
Priority to CN202111308454.1A (publication CN115937919A)
Priority to CN202111308470.0A (publication CN115761827A)
Priority to CN202111015242.4A (publication CN114155569B)
Publication of CN114155569A
Application granted
Publication of CN114155569B

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a makeup progress detection method, device, equipment, and storage medium. The method includes: acquiring a real-time makeup video of a user currently applying a specific makeup look; and determining, according to an initial frame image and a current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look. The application compares the current frame image of the user's makeup process with the initial frame image to determine the makeup progress. Makeup progress can be detected through image processing alone, with high accuracy, and the user's progress can be detected in real time for makeup processes such as highlight, contour, blush, foundation, concealer, eye shadow, eyeliner, and eyebrows. No deep learning model is needed, so the computation is light and the cost low; the processing load on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirements of makeup progress detection can be met.

Figure 202111015242

Description

A makeup progress detection method, device, equipment and storage medium

Technical Field

This application belongs to the technical field of image processing, and in particular relates to a makeup progress detection method, device, equipment, and storage medium.

Background

Makeup has become an essential part of many people's daily lives, and the makeup process involves many steps. If makeup progress could be fed back to the user in real time, it would greatly reduce the effort makeup demands of the user and save makeup time.

At present, some related technologies use deep learning models to provide functions such as virtual makeup try-on, skin tone detection, and personalized product recommendation. All of these functions require collecting a large number of face images in advance to train the deep learning models.

However, face images are private user data, so it is difficult to collect them at scale. Moreover, model training consumes substantial computing resources and is costly. Model accuracy is also inversely related to real-time performance: makeup progress detection must capture the user's facial information in real time to determine the current makeup progress, so the real-time requirements are strict, and deep learning models fast enough to meet them tend to have low detection accuracy.

Summary of the Invention

This application proposes a makeup progress detection method, device, equipment, and storage medium. The current makeup progress is determined from the difference between the initial frame image and the current frame image. Makeup progress detection is highly accurate and computationally light, with low cost; it reduces the processing load on the server, improves the efficiency of makeup progress detection, and can meet the real-time requirements of makeup progress detection.

An embodiment of the first aspect of this application proposes a makeup progress detection method, including:

acquiring a real-time makeup video of a user currently applying a specific makeup look;

determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look.

In some embodiments of this application, the specific makeup look includes highlight makeup or contour makeup; the determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look includes:

acquiring at least one target makeup area corresponding to the specific makeup look;

acquiring, according to the target makeup area, a first target area image corresponding to the specific makeup look from the initial frame image, and a second target area image corresponding to the specific makeup look from the current frame image;

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the acquiring, according to the target makeup area, a first target area image corresponding to the specific makeup look from the initial frame image includes:

detecting first face key points corresponding to the initial frame image;

acquiring, according to the first face key points, a face area image corresponding to the initial frame image;

acquiring, according to the first face key points and the target makeup area, the first target area image corresponding to the specific makeup look from the face area image.

In some embodiments of this application, the extracting, according to the first face key points and the target makeup area, the first target area image corresponding to the specific makeup look from the face area image includes:

determining, from the first face key points, one or more target key points located on the contour of the area in the face area image that corresponds to the target makeup area;

generating, according to the target key points corresponding to the target makeup area, a mask image corresponding to the face area image;

performing an AND operation on the mask image and the face area image to obtain the first target area image corresponding to the specific makeup look.
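
The AND-operation step above can be sketched as follows. This is a minimal illustration on toy grayscale values, not the patent's implementation; `apply_mask` is a hypothetical helper name. A pixel of the face area image survives only where the mask holds 255, because `x & 255 == x` and `x & 0 == 0` for 8-bit values.

```python
# Hedged sketch: bitwise-AND a grayscale face area image with a 0/255 mask
# image, row by row. Toy 2x2 data; all values are invented for illustration.

def apply_mask(face, mask):
    """Keep face pixels where the mask is 255; zero out everything else."""
    return [[f & m for f, m in zip(frow, mrow)]
            for frow, mrow in zip(face, mask)]

face = [[120, 200], [90, 35]]   # toy grayscale face area image
mask = [[255, 0], [255, 255]]   # mask: target makeup area is 255
print(apply_mask(face, mask))   # pixels outside the mask become 0
```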

In some embodiments of this application, the generating, according to the target key points corresponding to the target makeup area, a mask image corresponding to the face area image includes:

if the number of target key points corresponding to a target makeup area is more than one, determining, according to the target key points, each edge coordinate of the target makeup area in the face area image, and modifying the pixel values of all pixels within the area enclosed by the edge coordinates to a preset value, to obtain a mask area corresponding to the target makeup area;

if the number of target key points corresponding to a target makeup area is one, drawing an elliptical area of a preset size centered on the target key point, and modifying the pixel values of all pixels within the elliptical area to the preset value, to obtain a mask area corresponding to the target makeup area;

modifying the pixel values of all pixels outside the mask areas to zero, to obtain the mask image corresponding to the face area image.
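
The mask-construction rule above (preset value inside the enclosed region, zero outside) can be sketched as follows. This is a simplified illustration: `region` stands for the set of pixels already determined to lie inside the edge coordinates or the ellipse, which the patent derives from the target key points; the helper name and data are invented.

```python
# Hedged sketch of mask construction: pixels inside the (precomputed)
# enclosed region get the preset value, all other pixels become zero.

PRESET = 255  # preset value marking the target makeup area

def build_mask(height, width, region):
    """region: set of (row, col) pixels enclosed by the edge coordinates."""
    return [[PRESET if (r, c) in region else 0 for c in range(width)]
            for r in range(height)]

# Toy 3x3 face area image with a 2-pixel target makeup area
mask = build_mask(3, 3, {(0, 1), (1, 1)})
for row in mask:
    print(row)
```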

In some embodiments of this application, the specific makeup look includes blush makeup; the determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look includes:

acquiring at least one target makeup area corresponding to the specific makeup look;

generating a beauty mask map according to the target makeup area;

determining, according to the beauty mask map, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the beauty mask map, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image includes:

acquiring, with the beauty mask map as a reference, a first target area image of the makeup region from the initial frame image, and a second target area image of the makeup region from the current frame image;

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the specific makeup look includes eyeliner makeup; the determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look includes:

acquiring the beauty mask maps corresponding to the initial frame image and the current frame image;

simulating and generating, according to the initial frame image, a result image of the completed eyeliner makeup;

determining, according to the beauty mask maps, the result image, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the beauty mask maps, the result image, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image includes:

acquiring, with the beauty mask map corresponding to the initial frame image as a reference, a first target area image of the makeup region from the initial frame image;

acquiring, according to the beauty mask map corresponding to the current frame image, a second target area image of the makeup region from the current frame image;

acquiring, according to the result image, a third target area image of the eyeliner makeup;

determining, according to the first target area image, the second target area image, and the third target area image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the first target area image, the second target area image, and the third target area image, the current makeup progress corresponding to the current frame image includes:

converting the first target area image, the second target area image, and the third target area image, respectively, into images containing a saturation channel in the HLS color space;

determining, according to the converted first target area image, second target area image, and third target area image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the converted first target area image, second target area image, and third target area image, the current makeup progress corresponding to the current frame image includes:

calculating, respectively, a first average pixel value of the converted first target area image, a second average pixel value of the converted second target area image, and a third average pixel value of the converted third target area image;

calculating a first difference between the second average pixel value and the first average pixel value, and a second difference between the third average pixel value and the first average pixel value;

calculating the ratio of the first difference to the second difference, to obtain the current makeup progress corresponding to the current frame image.
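
The ratio described above amounts to progress = (current mean − initial mean) / (finished mean − initial mean) over the saturation channel. A minimal arithmetic sketch, with all saturation values invented for illustration:

```python
# Hedged sketch of the eyeliner progress formula. The lists stand for the
# saturation channel of the same region in the initial frame, the current
# frame, and the simulated finished-result image; values are made up.

def mean(vals):
    return sum(vals) / len(vals)

s_initial  = [30, 32, 31, 29]   # first target area image (initial frame)
s_current  = [45, 47, 44, 48]   # second target area image (current frame)
s_finished = [60, 62, 59, 61]   # third target area image (result image)

first_diff  = mean(s_current)  - mean(s_initial)   # change so far
second_diff = mean(s_finished) - mean(s_initial)   # total change when done
progress = first_diff / second_diff
print(f"{progress:.0%}")
```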

In some embodiments of this application, before the determining, according to the first target area image, the second target area image, and the third target area image, the current makeup progress corresponding to the current frame image, the method further includes:

performing alignment processing on the first target area image and the second target area image;

performing alignment processing on the first target area image and the third target area image.

In some embodiments of this application, the performing alignment processing on the first target area image and the second target area image includes:

performing binarization processing on the first target area image and the second target area image, respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image;

performing an AND operation on the first binarized mask image and the second binarized mask image, to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image.
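
The binarize-then-AND alignment step can be sketched as follows. The threshold and pixel values are illustrative, and the helper names are invented; the point is that after the AND only pixels present in both region images remain.

```python
# Hedged sketch of the alignment step: binarize both target area images,
# then AND the binarized masks to keep only their intersection.

THRESH = 0  # illustrative: any nonzero pixel counts as part of the region

def binarize(img):
    return [[255 if px > THRESH else 0 for px in row] for row in img]

def intersect(a, b):
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

first  = [[10, 0], [20, 30]]    # first target area image (toy values)
second = [[0, 5], [25, 40]]     # second target area image (toy values)
second_mask = intersect(binarize(first), binarize(second))
print(second_mask)              # 255 only where both regions have pixels
```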

In some embodiments of this application, the performing alignment processing on the first target area image and the second target area image further includes:

acquiring the face area image corresponding to the initial frame image and the face area image corresponding to the result image;

performing an AND operation on the second mask image and the face area image corresponding to the initial frame image, to obtain a new first target area image corresponding to the initial frame image;

performing an AND operation on the second mask image and the face area image corresponding to the result image, to obtain a new second target area image corresponding to the result image.

In some embodiments of this application, the acquiring the beauty mask maps corresponding to the initial frame image and the current frame image includes:

acquiring an eyeliner style map selected by the user;

if the user's eye state in the initial frame image is the open-eye state, acquiring the open-eye style map corresponding to the eyeliner style map, and determining the open-eye style map as the beauty mask map corresponding to the initial frame image;

if the user's eye state in the initial frame image is the closed-eye state, acquiring the closed-eye style map corresponding to the eyeliner style map, and determining the closed-eye style map as the beauty mask map corresponding to the initial frame image.

In some embodiments of this application, the specific makeup look includes eye shadow makeup; the determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look includes:

acquiring an eye shadow mask map;

splitting out, according to each target makeup area of the eye shadow makeup, the beauty mask map corresponding to each target makeup area from the eye shadow mask map;

determining, according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area, the current makeup progress corresponding to the current frame image includes:

acquiring, with the beauty mask map corresponding to each target makeup area as a reference, a first target area image corresponding to each target makeup area from the initial frame image;

acquiring, with the beauty mask map corresponding to each target makeup area as a reference, a second target area image corresponding to each target makeup area from the current frame image;

determining, according to the first target area image and the second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the first target area image and the second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image includes:

converting the first target area image and the second target area image corresponding to each target makeup area, respectively, into images containing a preset single-channel component in the HLS color space;

determining, according to the converted first target area image and second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the determining, according to the converted first target area image and second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image includes:

calculating, for pixels at the same position in the converted first target area image and second target area image corresponding to the same target makeup area, the absolute value of the difference of the preset single-channel component;

counting, for each target makeup area, the number of pixels whose absolute difference satisfies a preset makeup-completion condition;

calculating, for each target makeup area, the ratio of that pixel count to the total number of pixels in the corresponding target makeup area, to obtain the makeup progress corresponding to each target makeup area;

calculating, according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.
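
The final weighted combination can be sketched as a weighted sum of per-area progress values. The area names and preset weights below are invented for illustration; the patent does not specify them.

```python
# Hedged sketch: combine per-area eye shadow progress with preset weights.
# Weights are assumed to sum to 1; names and numbers are illustrative.

areas = {                       # area: (progress, preset weight)
    "lid":    (0.80, 0.5),
    "crease": (0.50, 0.3),
    "corner": (0.20, 0.2),
}

current_progress = sum(p * w for p, w in areas.values())
print(f"{current_progress:.0%}")
```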

In some embodiments of this application, the acquiring, with the beauty mask map as a reference, a first target area image from the initial frame image includes:

detecting first face key points corresponding to the initial frame image;

acquiring, according to the first face key points, a face area image corresponding to the initial frame image;

acquiring, with the beauty mask map as a reference, a first target area image of the makeup region from the face area image.

In some embodiments of this application, the acquiring, with the beauty mask map as a reference, a first target area image of the makeup region from the face area image includes:

converting the beauty mask map and the face area image, respectively, into binarized images;

performing an AND operation on the binarized image corresponding to the beauty mask map and the binarized image corresponding to the face area image, to obtain a first mask image corresponding to the intersection area of the beauty mask map and the face area image;

performing an AND operation on the first mask image and the face area image corresponding to the initial frame image, to obtain the first target area image.

In some embodiments of this application, before the performing an AND operation on the binarized image corresponding to the beauty mask map and the binarized image corresponding to the face area image, the method further includes:

determining, according to standard face key points corresponding to the beauty mask map, one or more first anchor points located on the contour of each makeup area in the beauty mask map;

determining, according to the first face key points, a second anchor point corresponding to each first anchor point from the face area image;

performing stretching processing on the beauty mask map, stretching each first anchor point to the position corresponding to its second anchor point.

In some embodiments of this application, the acquiring, with the beauty mask map as a reference, a first target area image of the makeup region from the face area image includes:

splitting the beauty mask map into multiple sub-mask maps, each sub-mask map including at least one target makeup area;

converting each sub-mask map and the face area image, respectively, into binarized images;

performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face area image, to obtain a sub-mask image corresponding to each sub-mask map;

performing an AND operation on each sub-mask image and the face area image corresponding to the initial frame image, to obtain multiple sub-target area images corresponding to the initial frame image;

merging the multiple sub-target area images into the first target area image corresponding to the initial frame image.
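
Because each sub-target area image is zero everywhere outside its own makeup area, the merge step can be sketched as an element-wise maximum (equivalently, a per-pixel OR) over the sub-images. The helper name and toy data are invented; the patent does not prescribe this particular merge operator.

```python
# Hedged sketch: merge non-overlapping sub-target area images into one
# first target area image via element-wise maximum.

def merge(images):
    h, w = len(images[0]), len(images[0][0])
    return [[max(img[r][c] for img in images) for c in range(w)]
            for r in range(h)]

left_area  = [[50, 0], [0, 0]]   # sub-target area image for one region
right_area = [[0, 0], [0, 70]]   # sub-target area image for another region
print(merge([left_area, right_area]))
```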

In some embodiments of this application, before the performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face area image, the method further includes:

determining, according to the standard face key points corresponding to the beauty mask map, one or more first anchor points located on the contour of the target makeup area in a first sub-mask map, the first sub-mask map being any one of the multiple sub-mask maps;

determining, according to the first face key points, a second anchor point corresponding to each first anchor point from the face area image;

performing stretching processing on the first sub-mask map, stretching each first anchor point to the position corresponding to its second anchor point.

In some embodiments of this application, the specific makeup look includes eyebrow makeup; the determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look includes:

acquiring a first target area image corresponding to the eyebrows from the initial frame image, and a second target area image corresponding to the eyebrows from the current frame image;

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

In some embodiments of this application, the acquiring a first target area image corresponding to the eyebrows from the initial frame image includes:

detecting first face key points corresponding to the initial frame image;

acquiring, according to the first face key points, a face area image corresponding to the initial frame image;

acquiring, according to the eyebrow key points included in the first face key points, the first target area image corresponding to the eyebrows from the face area image.

In some embodiments of this application, the cropping, according to the eyebrow key points included in the first face key points, the first target area image corresponding to the eyebrows from the face area image includes:

interpolating the eyebrow key points between the brow head and the brow peak included in the first face key points, to obtain multiple interpolation points;

cropping, from the face area image, the closed area formed by connecting all the eyebrow key points between the brow head and the brow peak with the multiple interpolation points, to obtain a partial eyebrow image between the brow head and the brow peak;

cropping, from the face area image, the closed area formed by connecting all the eyebrow key points between the brow peak and the brow tail, to obtain a partial eyebrow image between the brow peak and the brow tail;

stitching the partial eyebrow image between the brow head and the brow peak and the partial eyebrow image between the brow peak and the brow tail into the first target area image corresponding to the eyebrows.
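
The interpolation step can be sketched as linear interpolation between two adjacent eyebrow key points; the patent does not specify the interpolation scheme, so the method, coordinates, and point count below are all illustrative assumptions.

```python
# Hedged sketch: densify sparse brow-head-to-brow-peak key points by
# inserting n evenly spaced points between a pair of key points.

def interpolate(p, q, n):
    """n evenly spaced points strictly between key points p and q."""
    return [(p[0] + (q[0] - p[0]) * i / (n + 1),
             p[1] + (q[1] - p[1]) * i / (n + 1)) for i in range(1, n + 1)]

brow_head, brow_peak = (100.0, 60.0), (130.0, 48.0)  # invented (x, y) coords
points = interpolate(brow_head, brow_peak, 3)
print(points)   # three interpolation points along the segment
```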

In some embodiments of this application, the determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image includes:

converting the first target area image and the second target area image, respectively, into images containing a preset single-channel component in the HSV color space;

determining, according to the converted first target area image and second target area image, the current makeup progress corresponding to the current frame image.

在本申请的一些实施例中,所述根据转换后的所述第一目标区域图像和所述第二目标区域图像,确定所述当前帧图像对应的当前化妆进度,包括:In some embodiments of the present application, the determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image includes:

分别计算转换后的所述第一目标区域图像和所述第二目标区域图像中位置相同的像素点对应的所述预设单通道分量的差值绝对值;Calculating the absolute value of the difference between the preset single-channel components corresponding to pixels at the same position in the converted first target area image and the second target area image;

统计对应的差值绝对值满足预设化妆完成条件的像素点数目;Count the number of pixels whose absolute value of the corresponding difference satisfies the preset makeup completion condition;

计算统计的所述像素点数目与所述第一目标区域图像中所有目标上妆区域中的像素点总数目之间的比值,得到所述当前帧图像对应的当前化妆进度。Calculate the ratio between the counted number of pixels and the total number of pixels in all target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.

在本申请的一些实施例中,所述根据所述第一目标区域图像和所述第二目标区域图像,确定所述当前帧图像对应的当前化妆进度之前,还包括:In some embodiments of the present application, before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, further includes:

分别对所述第一目标区域图像和所述第二目标区域图像进行二值化处理，得到所述第一目标区域图像对应的第一二值化掩膜图像和所述第二目标区域图像对应的第二二值化掩膜图像；Performing binarization processing on the first target area image and the second target area image respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image;

对所述第一二值化掩膜图像和所述第二二值化掩膜图像进行与运算，得到所述第一目标区域图像与所述第二目标区域图像的相交区域对应的第二掩膜图像；Performing an AND operation on the first binarized mask image and the second binarized mask image, to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;

获取所述初始帧图像对应的人脸区域图像及所述当前帧图像对应的人脸区域图像;Acquiring the face area image corresponding to the initial frame image and the face area image corresponding to the current frame image;

对所述第二掩膜图像和所述初始帧图像对应的人脸区域图像进行与运算,得到所述初始帧图像对应的新第一目标区域图像;performing an AND operation on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image;

对所述第二掩膜图像和所述当前帧图像对应的人脸区域图像进行与运算,得到所述当前帧图像对应的新第二目标区域图像。An AND operation is performed on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.

在本申请的一些实施例中,所述确定所述当前帧图像对应的当前化妆进度之前,还包括:In some embodiments of the present application, before determining the current makeup progress corresponding to the current frame image, it also includes:

分别对所述第一目标区域图像和所述第二目标区域图像中的上妆区域进行边界腐蚀处理。Boundary erosion processing is performed on makeup areas in the first target area image and the second target area image respectively.

在本申请的一些实施例中,所述特定妆容包括粉底妆容;所述根据所述实时化妆视频的初始帧图像和当前帧图像,确定所述用户进行所述特定妆容的当前化妆进度,包括:In some embodiments of the present application, the specific makeup includes foundation makeup; the determining the current makeup progress of the user performing the specific makeup according to the initial frame image and the current frame image of the real-time makeup video includes:

根据所述初始帧图像,模拟生成完成所述特定妆容后的结果图像;According to the initial frame image, simulate and generate a result image after completing the specific makeup;

分别获取所述初始帧图像、所述结果图像和所述当前帧图像对应的整体图像亮度;Obtaining the overall image brightness corresponding to the initial frame image, the result image, and the current frame image respectively;

分别获取所述初始帧图像、所述结果图像和所述当前帧图像对应的人脸区域亮度;Obtaining the brightness of the face area corresponding to the initial frame image, the result image, and the current frame image respectively;

根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的整体图像亮度和人脸区域亮度,确定所述当前帧图像对应的当前化妆进度。The current makeup progress corresponding to the current frame image is determined according to the overall image brightness and face area brightness corresponding to the initial frame image, the result image and the current frame image respectively.

在本申请的一些实施例中,所述分别获取所述初始帧图像、所述结果图像和所述当前帧图像对应的整体图像亮度,包括:In some embodiments of the present application, said obtaining the overall image brightness corresponding to the initial frame image, the result image, and the current frame image respectively includes:

分别将所述初始帧图像、所述结果图像和所述当前帧图像转换为灰度图像;respectively converting the initial frame image, the result image and the current frame image into grayscale images;

分别计算转换后所述初始帧图像、所述结果图像和所述当前帧图像各自对应的灰度图像中像素点的灰度平均值;Calculating the grayscale average value of the pixels in the grayscale images respectively corresponding to the converted initial frame image, the resultant image, and the current frame image;

将所述初始帧图像、所述结果图像和所述当前帧图像各自对应的灰度平均值分别确定为所述初始帧图像、所述结果图像和所述当前帧图像各自对应的整体图像亮度。The respective grayscale average values corresponding to the initial frame image, the result image and the current frame image are respectively determined as the respective overall image luminances corresponding to the initial frame image, the result image and the current frame image.

在本申请的一些实施例中,所述分别获取所述初始帧图像、所述结果图像和所述当前帧图像对应的人脸区域亮度,包括:In some embodiments of the present application, said obtaining the brightness of the face area corresponding to the initial frame image, the result image and the current frame image respectively includes:

分别获取所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像;Respectively acquire the face area images corresponding to the initial frame image, the result image and the current frame image;

分别将所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像转换为人脸灰度图像;respectively converting the face area images corresponding to the initial frame image, the result image, and the current frame image into face grayscale images;

分别计算所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸灰度图像中像素点的灰度平均值，得到所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域亮度。Calculating the grayscale average value of the pixels in the face grayscale images corresponding to the initial frame image, the result image and the current frame image respectively, to obtain the face area brightness corresponding to each of the initial frame image, the result image and the current frame image.

在本申请的一些实施例中，所述根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的整体图像亮度和人脸区域亮度，确定所述当前帧图像对应的当前化妆进度，包括：In some embodiments of the present application, the determining the current makeup progress corresponding to the current frame image according to the overall image brightness and face area brightness corresponding to each of the initial frame image, the result image and the current frame image includes:

根据所述初始帧图像对应的整体图像亮度和人脸区域亮度以及所述当前帧图像对应的整体图像亮度和人脸区域亮度,确定所述当前帧图像对应的第一环境变化亮度;Determine the first environmental change brightness corresponding to the current frame image according to the overall image brightness and face area brightness corresponding to the initial frame image and the overall image brightness and face area brightness corresponding to the current frame image;

根据所述初始帧图像对应的整体图像亮度和人脸区域亮度以及所述结果图像对应的整体图像亮度和人脸区域亮度,确定所述结果图像对应的第二环境变化亮度;Determine the second environmental change brightness corresponding to the result image according to the overall image brightness and face area brightness corresponding to the initial frame image and the overall image brightness and face area brightness corresponding to the result image;

根据所述第一环境变化亮度、所述第二环境变化亮度、所述初始帧图像对应的人脸区域亮度、所述当前帧图像对应的人脸区域亮度、所述结果图像对应的人脸区域亮度，确定所述当前帧图像对应的当前化妆进度。Determining the current makeup progress corresponding to the current frame image according to the first environmental change brightness, the second environmental change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image.

在本申请的一些实施例中，所述根据所述初始帧图像对应的整体图像亮度和人脸区域亮度以及所述当前帧图像对应的整体图像亮度和人脸区域亮度，确定所述当前帧图像对应的第一环境变化亮度，包括：In some embodiments of the present application, the determining the first environmental change brightness corresponding to the current frame image according to the overall image brightness and face area brightness corresponding to the initial frame image and the overall image brightness and face area brightness corresponding to the current frame image includes:

计算所述初始帧图像对应的整体图像亮度与其对应的人脸区域亮度之间的差值,得到所述初始帧图像的环境亮度;Calculating the difference between the overall image brightness corresponding to the initial frame image and the brightness of the corresponding face area to obtain the ambient brightness of the initial frame image;

计算所述当前帧图像对应的整体图像亮度与其对应的人脸区域亮度之间的差值,得到所述当前帧图像的环境亮度;Calculating the difference between the overall image brightness corresponding to the current frame image and the corresponding face area brightness to obtain the ambient brightness of the current frame image;

将所述当前帧图像的环境亮度与所述初始帧图像的环境亮度之间的差值绝对值确定为所述当前帧图像对应的第一环境变化亮度。Determine the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image as the first ambient change brightness corresponding to the current frame image.

在本申请的一些实施例中，所述根据所述第一环境变化亮度、所述第二环境变化亮度、所述初始帧图像对应的人脸区域亮度、所述当前帧图像对应的人脸区域亮度、所述结果图像对应的人脸区域亮度，确定所述当前帧图像对应的当前化妆进度，包括：In some embodiments of the present application, the determining the current makeup progress corresponding to the current frame image according to the first environmental change brightness, the second environmental change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image includes:

根据所述第一环境变化亮度、所述初始帧图像对应的人脸区域亮度、所述当前帧图像对应的人脸区域亮度,确定所述当前帧图像对应的上妆亮度变化值;According to the brightness of the first environmental change, the brightness of the face area corresponding to the initial frame image, and the brightness of the face area corresponding to the current frame image, determine the makeup brightness change value corresponding to the current frame image;

根据所述第二环境变化亮度、所述初始帧图像对应的人脸区域亮度、所述结果图像对应的人脸区域亮度,确定所述结果图像对应的上妆亮度变化值;According to the brightness of the second environmental change, the brightness of the face area corresponding to the initial frame image, and the brightness of the face area corresponding to the result image, determine the change value of the makeup brightness corresponding to the result image;

计算所述当前帧图像对应的上妆亮度变化值与所述结果图像对应的上妆亮度变化值的比值,得到所述当前帧图像对应的当前化妆进度。Calculate the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.

在本申请的一些实施例中，所述根据所述第一环境变化亮度、所述初始帧图像对应的人脸区域亮度、所述当前帧图像对应的人脸区域亮度，确定所述当前帧图像对应的上妆亮度变化值，包括：In some embodiments of the present application, the determining the makeup brightness change value corresponding to the current frame image according to the first environmental change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image includes:

计算所述当前帧图像对应的人脸区域亮度与所述初始帧图像对应的人脸区域亮度之间的差值,得到所述当前帧图像对应的总亮度变化值;calculating the difference between the brightness of the face area corresponding to the current frame image and the brightness of the face area corresponding to the initial frame image to obtain a total brightness change value corresponding to the current frame image;

计算所述总亮度变化值与所述第一环境变化亮度之间的差值,得到所述当前帧图像对应的上妆亮度变化值。Calculate the difference between the total brightness change value and the first environmental change brightness to obtain the makeup brightness change value corresponding to the current frame image.
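Putting the brightness quantities above together, the foundation-progress computation reduces to simple arithmetic. The sketch below is a hypothetical illustration; the function and variable names are not from this application.

```python
def foundation_progress(init_overall, init_face,
                        cur_overall, cur_face,
                        res_overall, res_face):
    # Ambient brightness = overall image brightness minus face-area brightness.
    env_init = init_overall - init_face
    env_cur = cur_overall - cur_face
    env_res = res_overall - res_face

    # Environmental change brightness relative to the initial frame.
    env_change_cur = abs(env_cur - env_init)   # first environmental change
    env_change_res = abs(env_res - env_init)   # second environmental change

    # Makeup brightness change = total face-brightness change
    # minus the environmental change.
    makeup_cur = (cur_face - init_face) - env_change_cur
    makeup_res = (res_face - init_face) - env_change_res

    # Progress = current makeup change / expected final makeup change.
    return makeup_cur / makeup_res
```

Subtracting the ambient term is what makes the estimate robust to lighting shifts between the initial and current frames.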

在本申请的一些实施例中,所述方法还包括:In some embodiments of the present application, the method also includes:

若所述第一环境变化亮度大于预设阈值,则将上一帧图像对应的化妆进度确定为所述当前帧图像对应的当前化妆进度;If the brightness of the first environmental change is greater than a preset threshold, then determine the makeup progress corresponding to the previous frame image as the current makeup progress corresponding to the current frame image;

发送第一提示信息给所述用户的终端,所述第一提示信息用于提示所述用户回到所述初始帧图像对应的亮度环境下上妆。Sending first prompt information to the user's terminal, where the first prompt information is used to prompt the user to return to the brightness environment corresponding to the initial frame image to apply makeup.

在本申请的一些实施例中,所述特定妆容包括遮瑕妆容;所述根据所述实时化妆视频的初始帧图像和当前帧图像,确定所述用户进行所述特定妆容的当前化妆进度,包括:In some embodiments of the present application, the specific makeup includes concealer makeup; the determination of the current makeup progress of the user performing the specific makeup according to the initial frame image and the current frame image of the real-time makeup video includes:

分别获取所述初始帧图像和所述当前帧图像各自对应的脸部瑕疵信息;Respectively acquiring facial blemish information corresponding to the initial frame image and the current frame image;

根据所述初始帧图像对应的脸部瑕疵信息和所述当前帧图像对应的脸部瑕疵信息,计算所述当前帧图像与所述初始帧图像之间的脸部瑕疵差异值;According to the facial blemish information corresponding to the initial frame image and the facial blemish information corresponding to the current frame image, calculate the facial blemish difference value between the current frame image and the initial frame image;

若所述脸部瑕疵差异值大于预设阈值,则根据所述脸部瑕疵差异值和所述初始帧图像对应的脸部瑕疵信息,计算所述当前帧图像对应的当前化妆进度;If the facial blemish difference value is greater than a preset threshold, then calculate the current makeup progress corresponding to the current frame image according to the facial blemish difference value and the facial blemish information corresponding to the initial frame image;

若所述脸部瑕疵差异值小于或等于所述预设阈值，则获取所述用户完成遮瑕上妆后的结果图像，根据所述初始帧图像、所述结果图像和所述当前帧图像，确定所述当前帧图像对应的当前化妆进度。If the facial blemish difference value is less than or equal to the preset threshold, acquiring a result image after the user completes the concealer makeup, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.

在本申请的一些实施例中，所述脸部瑕疵信息包括瑕疵类别及对应的瑕疵数目；所述根据所述初始帧图像对应的脸部瑕疵信息和所述当前帧图像对应的脸部瑕疵信息，计算所述当前帧图像与所述初始帧图像之间的脸部瑕疵差异值，包括：In some embodiments of the present application, the facial blemish information includes blemish categories and corresponding blemish numbers; the calculating the facial blemish difference value between the current frame image and the initial frame image according to the facial blemish information corresponding to the initial frame image and the facial blemish information corresponding to the current frame image includes:

分别计算每种瑕疵类别下所述初始帧图像对应的瑕疵数目与所述当前帧图像对应的瑕疵数目之间的差值;Calculating the difference between the number of blemishes corresponding to the initial frame image and the number of blemishes corresponding to the current frame image under each blemish category;

计算每种瑕疵类别对应的差值之和,将得到的和值作为所述当前帧图像与所述初始帧图像之间的脸部瑕疵差异值。Calculate the sum of the differences corresponding to each blemish category, and use the obtained sum as the facial blemish difference value between the current frame image and the initial frame image.

在本申请的一些实施例中,所述根据所述脸部瑕疵差异值和所述初始帧图像对应的脸部瑕疵信息,计算所述当前帧图像对应的当前化妆进度,包括:In some embodiments of the present application, the calculating the current makeup progress corresponding to the current frame image according to the facial blemish difference value and the facial blemish information corresponding to the initial frame image includes:

计算所述初始帧图像对应的脸部瑕疵信息中各瑕疵类别对应的瑕疵数目之和,得到总瑕疵数;Calculating the sum of the number of blemishes corresponding to each blemish category in the facial blemish information corresponding to the initial frame image to obtain the total number of blemishes;

计算所述脸部瑕疵差异值与所述总瑕疵数之间的比值,将所述比值作为所述当前帧图像对应的当前化妆进度。Calculate the ratio between the facial blemish difference value and the total number of blemishes, and use the ratio as the current makeup progress corresponding to the current frame image.
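A small Python sketch of the blemish-difference branch above, assuming the facial blemish information is represented as a dict mapping blemish category to count (the data structure is an assumption for illustration):

```python
def concealer_progress(init_blemishes, cur_blemishes):
    # Per-category difference between the initial and current counts,
    # summed over all categories = facial blemish difference value.
    diff = sum(init_blemishes[c] - cur_blemishes.get(c, 0)
               for c in init_blemishes)

    # Total blemish count in the initial frame.
    total = sum(init_blemishes.values())

    # Progress = blemish difference value / total blemish count.
    return diff / total
```

Per the embodiment above, this ratio is only used when the difference value exceeds the preset threshold; otherwise the result-image comparison applies instead.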

在本申请的一些实施例中，所述获取所述用户完成遮瑕上妆后的结果图像，根据所述初始帧图像、所述结果图像和所述当前帧图像，确定所述当前帧图像对应的当前化妆进度，包括：In some embodiments of the present application, the acquiring the result image after the user completes the concealer makeup and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image includes:

根据所述初始帧图像,模拟生成所述用户完成遮瑕上妆后的结果图像;According to the initial frame image, simulate and generate the result image after the user completes the concealer and makeup;

分别获取所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像;Respectively acquire the face area images corresponding to the initial frame image, the result image and the current frame image;

根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像,确定所述当前帧图像对应的当前化妆进度。According to the face area images corresponding to the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image is determined.

在本申请的一些实施例中,所述根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像,确定所述当前帧图像对应的当前化妆进度,包括:In some embodiments of the present application, the determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image includes:

分别将所述初始帧图像、所述结果图像和所述当前帧图像各自对应的人脸区域图像转换为HLS颜色空间下包含饱和度通道的图像;Respectively converting the face area images corresponding to the initial frame image, the result image and the current frame image into images containing saturation channels in the HLS color space;

通过预设滤波算法分别计算转换后所述初始帧图像、所述结果图像和所述当前帧图像各自的人脸区域图像对应的平滑因子；Calculating, through a preset filtering algorithm, the smoothing factors corresponding to the converted face area images of the initial frame image, the result image and the current frame image respectively;

根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的平滑因子,确定所述当前帧图像对应的当前化妆进度。According to the respective smoothing factors corresponding to the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image is determined.

在本申请的一些实施例中,所述根据所述初始帧图像、所述结果图像和所述当前帧图像各自对应的平滑因子,确定所述当前帧图像对应的当前化妆进度,包括:In some embodiments of the present application, the determining the current makeup progress corresponding to the current frame image according to the respective smoothing factors of the initial frame image, the result image and the current frame image includes:

计算所述当前帧图像对应的平滑因子与所述初始帧图像对应的平滑因子之间的第一差值;calculating a first difference between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image;

计算所述结果图像对应的平滑因子与所述初始帧图像对应的平滑因子之间的第二差值;calculating a second difference between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image;

计算所述第一差值与所述第二差值之间的比值,将所述比值作为所述当前帧图像对应的当前化妆进度。Calculate the ratio between the first difference and the second difference, and use the ratio as the current makeup progress corresponding to the current frame image.

在本申请的一些实施例中,所述分别获取所述初始帧图像和所述当前帧图像各自对应的脸部瑕疵信息,包括:In some embodiments of the present application, the respectively obtaining the facial blemish information corresponding to the initial frame image and the current frame image includes:

分别获取所述初始帧图像和所述当前帧图像各自对应的人脸区域图像;Respectively acquire the face area images corresponding to the initial frame image and the current frame image;

通过预设的皮肤检测模型分别检测所述初始帧图像和所述当前帧图像各自对应的人脸区域图像中各瑕疵类别对应的瑕疵数目，得到所述初始帧图像和所述当前帧图像各自对应的脸部瑕疵信息。Detecting, by a preset skin detection model, the number of blemishes corresponding to each blemish category in the face area images respectively corresponding to the initial frame image and the current frame image, to obtain the facial blemish information corresponding to each of the initial frame image and the current frame image.

在本申请的一些实施例中,所述获取所述初始帧图像对应的人脸区域图像,包括:In some embodiments of the present application, the acquiring the face area image corresponding to the initial frame image includes:

根据所述初始帧图像对应的第一人脸关键点,对所述初始帧图像及所述第一人脸关键点进行旋转矫正;Performing rotation correction on the initial frame image and the first human face key point according to the first human face key point corresponding to the initial frame image;

根据矫正后的所述第一人脸关键点,从矫正后的所述初始帧图像中截取包含人脸区域的图像;According to the corrected first human face key point, intercepting an image containing a human face area from the corrected initial frame image;

将所述包含人脸区域的图像缩放至预设尺寸,得到所述初始帧图像对应的人脸区域图像。The image containing the face area is scaled to a preset size to obtain a face area image corresponding to the initial frame image.

在本申请的一些实施例中,所述根据所述第一人脸关键点,对所述初始帧图像及所述第一人脸关键点进行旋转矫正,包括:In some embodiments of the present application, performing rotation correction on the initial frame image and the first facial key point according to the first human face key point includes:

根据所述第一人脸关键点包括的左眼关键点和右眼关键点,分别确定左眼中心坐标和右眼中心坐标;Determine left eye center coordinates and right eye center coordinates respectively according to the left eye key point and right eye key point included in the first human face key point;

根据所述左眼中心坐标和所述右眼中心坐标,确定所述初始帧图像对应的旋转角度及旋转中心点坐标;Determine the rotation angle and rotation center point coordinates corresponding to the initial frame image according to the left eye center coordinates and the right eye center coordinates;

根据所述旋转角度和所述旋转中心点坐标,对所述初始帧图像及所述第一人脸关键点进行旋转矫正。Rotation correction is performed on the initial frame image and the first human face key point according to the rotation angle and the coordinates of the rotation center point.

在本申请的一些实施例中,所述根据矫正后的所述第一人脸关键点,从矫正后的所述初始帧图像中截取包含人脸区域的图像,包括:In some embodiments of the present application, according to the corrected first human face key point, intercepting an image containing a human face area from the corrected initial frame image includes:

根据矫正后的所述第一人脸关键点,对矫正后的所述初始帧图像中包含的人脸区域进行图像截取。According to the corrected key points of the first human face, image interception is performed on the human face area included in the corrected initial frame image.

在本申请的一些实施例中,所述根据矫正后的所述第一人脸关键点,对矫正后的所述初始帧图像中包含的人脸区域进行图像截取,包括:In some embodiments of the present application, the image interception of the face area contained in the corrected initial frame image according to the corrected first human face key point includes:

从矫正后的所述第一人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值;Determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key point;

根据所述最小横坐标值、所述最小纵坐标值、最大横坐标值和最大纵坐标值,确定矫正后的所述初始帧图像中人脸区域对应的截取框;According to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value, determine the clipping frame corresponding to the face area in the corrected initial frame image;

根据所述截取框,从矫正后的所述初始帧图像中截取出包含所述人脸区域的图像。According to the clipping frame, an image including the face area is clipped from the rectified initial frame image.

在本申请的一些实施例中,所述方法还包括:In some embodiments of the present application, the method also includes:

将所述截取框放大预设倍数;Enlarging the interception frame by a preset multiple;

根据放大后的所述截取框,从矫正后的所述初始帧图像中截取出包含所述人脸区域的图像。According to the enlarged clipping frame, an image including the face area is clipped from the corrected initial frame image.
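Combining the bounding-box, enlargement, and cut-out steps above, a hypothetical Python sketch (the enlargement multiple and function name are example assumptions):

```python
import numpy as np

def crop_face_region(image, keypoints, scale=1.0):
    # Bounding box from the min/max x and y of the corrected key points.
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    cx, cy = (xs.min() + xs.max()) / 2, (ys.min() + ys.max()) / 2

    # Optionally enlarge the box by a preset multiple around its center.
    half_w = (xs.max() - xs.min()) / 2 * scale
    half_h = (ys.max() - ys.min()) / 2 * scale

    # Clamp to the image bounds and cut out the face region.
    h, w = image.shape[:2]
    x1 = max(int(cx - half_w), 0)
    y1 = max(int(cy - half_h), 0)
    x2 = min(int(cx + half_w), w)
    y2 = min(int(cy + half_h), h)
    return image[y1:y2, x1:x2]
```

Enlarging the box keeps some margin around the face so that subsequent scaling to the preset size does not clip the chin or forehead.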

在本申请的一些实施例中,所述方法还包括:In some embodiments of the present application, the method also includes:

根据所述包含人脸区域的图像的尺寸及所述预设尺寸,对矫正后的所述第一人脸关键点进行缩放平移处理。According to the size of the image including the face area and the preset size, zooming and translation processing is performed on the corrected first key points of the face.

在本申请的一些实施例中,所述方法还包括:In some embodiments of the present application, the method also includes:

检测所述初始帧图像和所述当前帧图像中是否均仅包含同一个用户的人脸图像;Detecting whether both the initial frame image and the current frame image only contain the face image of the same user;

如果是,则执行所述确定所述用户进行所述特定妆容的当前化妆进度的操作;If yes, then perform the operation of determining the current makeup progress of the user performing the specific makeup look;

如果否,则发送提示信息给所述用户的终端,所述提示信息用于提示所述用户保持所述实时化妆视频中仅出现同一个用户的人脸。If not, send prompt information to the user's terminal, where the prompt information is used to prompt the user to keep only the face of the same user appearing in the real-time makeup video.

本申请第二方面的实施例提供了一种化妆进度检测装置,包括:The embodiment of the second aspect of the present application provides a makeup progress detection device, including:

视频获取模块,用于获取用户当前进行特定妆容的实时化妆视频;The video acquisition module is used to acquire the real-time makeup video of the user currently performing a specific makeup look;

化妆进度确定模块,用于根据所述实时化妆视频的初始帧图像和当前帧图像,确定所述用户进行所述特定妆容的当前化妆进度。The makeup progress determination module is configured to determine the current makeup progress of the user performing the specific makeup look according to the initial frame image and the current frame image of the real-time makeup video.

本申请第三方面的实施例提供了一种电子设备，包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序，所述处理器运行所述计算机程序以实现上述第一方面所述的方法。The embodiment of the third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and operable on the processor, where the processor runs the computer program to implement the method described in the first aspect above.

本申请第四方面的实施例提供了一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行实现上述第一方面所述的方法。The embodiment of the fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, and the program is executed by a processor to implement the method described in the first aspect above.

本申请实施例中提供的技术方案,至少具有如下技术效果或优点:The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:

在本申请实施例中，将用户化妆过程的当前帧图像与初始帧图像做对比来确定化妆进度。仅通过图像处理即可检测出化妆进度，化妆进度检测的准确性很高，对于高光、修容、腮红、粉底、遮瑕、眼影、眼线、眉毛等化妆过程均能实时检测用户的化妆进度。无需使用深度学习模型，运算量小，成本低，减少了服务器的处理压力，提高了化妆进度检测的效率，能够满足化妆进度检测的实时性要求。In the embodiments of the present application, the makeup progress is determined by comparing the current frame image of the user's makeup process with the initial frame image. The makeup progress can be detected through image processing alone, with high detection accuracy, and the user's makeup progress can be detected in real time for makeup processes such as highlight, contouring, blush, foundation, concealer, eye shadow, eyeliner and eyebrows. No deep learning model is required, so the amount of computation is small, the cost is low, the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirements of makeup progress detection can be met.

本申请附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变的明显,或通过本申请的实践了解到。Additional aspects and advantages of the present application will be set forth in part in the description which follows, and in part will be obvious from the description which follows, or may be learned by practice of the present application.

附图说明Description of drawings

通过阅读下文优选实施方式的详细描述,各种其他的优点和益处对于本领域普通技术人员将变得清楚明了。附图仅用于示出优选实施方式的目的,而并不认为是对本申请的限制。而且在整个附图中,用相同的参考符号表示相同的部件。Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiment. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the application. Also throughout the drawings, the same reference numerals are used to designate the same components.

在附图中:In the drawings:

图1示出了本申请一实施例所提供的一种化妆进度检测方法的流程图;Fig. 1 shows a flowchart of a makeup progress detection method provided by an embodiment of the present application;

图2示出了本申请一实施例所提供的用于检测高光、修容等妆容的化妆进度检测方法的流程图;Fig. 2 shows a flow chart of a makeup progress detection method for detecting makeup such as highlights and grooming provided by an embodiment of the present application;

图3示出了本申请一实施例所提供的客户端显示的供用户选择目标上妆区域的显示界面的示意图;Fig. 3 shows a schematic diagram of a display interface for a user to select a target makeup area displayed by a client provided by an embodiment of the present application;

图4示出了本申请一实施例所提供的求解图像的旋转角度的示意图;FIG. 4 shows a schematic diagram of solving the rotation angle of an image provided by an embodiment of the present application;

图5示出了本申请一实施例所提供的两次坐标系转换的示意图;FIG. 5 shows a schematic diagram of two coordinate system transformations provided by an embodiment of the present application;

图6示出了本申请一实施例所提供的用于检测高光、修容等妆容的化妆进度检测方法的模块流程示意图;Fig. 6 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting highlights, trimming and other makeup provided by an embodiment of the present application;

图7示出了本申请一实施例所提供的用于检测腮红等妆容的化妆进度检测方法的流程图;Fig. 7 shows a flow chart of a makeup progress detection method for detecting blush and other makeup provided by an embodiment of the present application;

图8示出了本申请一实施例所提供的客户端显示的供用户选择上妆区域的显示界面的另一示意图;Fig. 8 shows another schematic diagram of the display interface displayed by the client for the user to select a makeup area provided by an embodiment of the present application;

图9示出了本申请一实施例所提供的用于检测腮红等妆容的化妆进度检测方法的模块流程示意图;Fig. 9 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting blush and other makeup provided by an embodiment of the present application;

图10示出了本申请一实施例所提供的用于检测眼线妆容的化妆进度检测方法的流程图;Fig. 10 shows a flowchart of a makeup progress detection method for detecting eyeliner makeup provided by an embodiment of the present application;

图11示出了本申请一实施例所提供的用于检测眼线妆容的化妆进度检测方法的模块流程示意图;Fig. 11 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting eyeliner makeup provided by an embodiment of the present application;

图12示出了本申请一实施例所提供的用于检测眼影妆容的化妆进度检测方法的流程图;Fig. 12 shows a flowchart of a makeup progress detection method for detecting eye shadow makeup provided by an embodiment of the present application;

图13示出了本申请一实施例所提供的用于检测眼影妆容的化妆进度检测方法的模块流程示意图;Fig. 13 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting eye shadow makeup provided by an embodiment of the present application;

图14示出了本申请一实施例所提供的用于检测眼影妆容的化妆进度检测装置的结构示意图;Fig. 14 shows a schematic structural diagram of a makeup progress detection device for detecting eyeshadow makeup provided by an embodiment of the present application;

图15示出了本申请一实施例所提供的用于检测眉毛妆容的化妆进度检测方法的流程图;Fig. 15 shows a flowchart of a makeup progress detection method for detecting eyebrow makeup provided by an embodiment of the present application;

图16示出了本申请一实施例所提供的用于检测眉毛妆容的化妆进度检测方法的模块流程示意图;Fig. 16 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting eyebrow makeup provided by an embodiment of the present application;

图17示出了本申请一实施例所提供的用于检测眉毛妆容的化妆进度检测装置的结构示意图;Fig. 17 shows a schematic structural diagram of a makeup progress detection device for detecting eyebrow makeup provided by an embodiment of the present application;

图18示出了本申请一实施例所提供的用于检测粉底、散粉等妆容的化妆进度检测方法的流程图;Fig. 18 shows a flow chart of a makeup progress detection method for detecting foundation, loose powder and other makeup provided by an embodiment of the present application;

图19示出了本申请一实施例所提供的用于检测粉底、散粉等妆容的化妆进度检测方法的另一流程图;Fig. 19 shows another flow chart of a makeup progress detection method for detecting foundation, loose powder and other makeup provided by an embodiment of the present application;

图20示出了本申请一实施例所提供的用于检测粉底、散粉等妆容的化妆进度检测方法的模块流程示意图;Fig. 20 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting foundation, loose powder and other makeup provided by an embodiment of the present application;

图21示出了本申请一实施例所提供的用于检测遮瑕妆容的化妆进度检测方法的流程图;Fig. 21 shows a flowchart of a makeup progress detection method for detecting concealer makeup provided by an embodiment of the present application;

图22示出了本申请一实施例所提供的用于检测遮瑕妆容的化妆进度检测方法的模块流程示意图;Fig. 22 shows a schematic diagram of a module flow chart of a makeup progress detection method for detecting concealer makeup provided by an embodiment of the present application;

图23示出了本申请一实施例所提供的一种妆容颜色识别方法的流程图;Fig. 23 shows a flowchart of a makeup color recognition method provided by an embodiment of the present application;

图24示出了本申请一实施例所提供的一种妆容颜色识别装置的结构示意图;Fig. 24 shows a schematic structural view of a makeup color recognition device provided by an embodiment of the present application;

图25示出了本申请一实施例所提供的祛除面部瑕疵的图像处理方法的流程图;Fig. 25 shows a flowchart of an image processing method for removing facial blemishes provided by an embodiment of the present application;

图26(a)示出了用户的人脸图像,图26(b)示出了图26(a)所示人脸图像对应的瑕疵纹理贴图;Figure 26(a) shows the user's face image, and Figure 26(b) shows the blemish texture map corresponding to the face image shown in Figure 26(a);

图27示出了本申请一实施例所提供的祛除面部瑕疵的图像处理装置的结构示意图;Fig. 27 shows a schematic structural diagram of an image processing device for removing facial blemishes provided by an embodiment of the present application;

图28示出了本申请一实施例所提供的一种化妆进度检测装置的结构示意图;Fig. 28 shows a schematic structural diagram of a makeup progress detection device provided by an embodiment of the present application;

图29示出了本申请一实施例所提供的一种电子设备的结构示意图;Fig. 29 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application;

图30示出了本申请一实施例所提供的一种存储介质的示意图。FIG. 30 shows a schematic diagram of a storage medium provided by an embodiment of the present application.

具体实施方式Detailed Description of the Embodiments

下面将参照附图更详细地描述本申请的示例性实施方式。虽然附图中显示了本申请的示例性实施方式,然而应当理解,可以以各种形式实现本申请而不应被这里阐述的实施方式所限制。相反,提供这些实施方式是为了能够更透彻地理解本申请,并且能够将本申请的范围完整的传达给本领域的技术人员。Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided for thorough understanding of the application and to fully convey the scope of the application to those skilled in the art.

需要注意的是,除非另有说明,本申请使用的技术术语或者科学术语应当为本申请所属领域技术人员所理解的通常意义。It should be noted that, unless otherwise specified, technical terms or scientific terms used in this application shall have the usual meanings understood by those skilled in the art to which this application belongs.

下面结合附图来描述根据本申请实施例提出的一种化妆进度检测方法、装置、设备及存储介质。A makeup progress detection method, device, device and storage medium according to the embodiments of the present application will be described below with reference to the accompanying drawings.

目前相关技术中存在一些虚拟试妆功能，可以应用在销售柜台或者手机应用软件中，采用人脸识别技术对用户提供虚拟试妆服务，可以将多种妆容进行搭配和实时的面部贴合展示。此外还提供人脸皮肤检测服务，但这些服务只能解决用户挑选适合自己的化妆品，或者选取适合自己的皮肤保养方案的需求。基于这些服务可以帮助用户挑选适合自己的高光/修容化妆产品，但是无法对上妆的进度进行显示，不能满足用户实时化妆的需求。相关技术中还存在一些使用深度学习模型提供虚拟试妆、肤色侦测、个性化产品推荐等功能，这些功能均需要预先收集大量的人脸图片对深度学习模型进行训练。但人脸图片是用户的隐私数据，很难收集到庞大的人脸图片。且模型训练需耗费大量计算资源，成本高。模型的精度与实时性成反比，化妆进度检测需要实时捕获用户人脸面部信息来确定用户当前的化妆进度，实时性要求很高，能够满足实时性要求的深度学习模型，其检测的准确性不高。At present, related technologies offer some virtual makeup try-on functions, which can be deployed at sales counters or in mobile applications. They use face recognition to provide users with virtual try-on services, matching various makeup looks and displaying them fitted to the face in real time. Face skin detection services are also provided, but such services only help users choose cosmetics that suit them or select a suitable skin care plan. Based on these services, users can pick highlighting/contouring products that suit them, but the progress of applying makeup cannot be displayed, so the need for real-time makeup guidance is not met. Related technologies also use deep learning models to provide virtual try-on, skin color detection, personalized product recommendation and similar functions, all of which require collecting a large number of face images in advance to train the models. However, face images are users' private data, and it is difficult to collect them at scale. Moreover, model training consumes substantial computing resources and is costly. Model accuracy is also inversely related to real-time performance: makeup progress detection must capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is strict, and deep learning models fast enough to meet it are not sufficiently accurate.

基于此,本申请实施例提供了一种化妆进度检测方法,该方法将用户化妆过程的当前帧图像与初始帧图像(即第一帧图像)做对比来确定化妆进度。仅通过图像处理即可检测出化妆进度,化妆进度检测的准确性很高,对于高光、修容、腮红、粉底、遮瑕、眼影、眼线、眉毛等化妆过程均能实时检测用户的化妆进度。无需使用深度学习模型,运算量小,成本低,减少了服务器的处理压力,提高了化妆进度检测的效率,能够满足化妆进度检测的实时性要求。Based on this, an embodiment of the present application provides a makeup progress detection method, which compares the current frame image of the user's makeup process with the initial frame image (ie, the first frame image) to determine the makeup progress. The makeup progress can be detected only through image processing, and the accuracy of makeup progress detection is very high. It can detect the user's makeup progress in real time for the makeup process such as highlights, contouring, blush, foundation, concealer, eye shadow, eyeliner, and eyebrows. There is no need to use a deep learning model, the amount of calculation is small, the cost is low, the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirements of makeup progress detection can be met.

参见图1,该方法具体包括以下步骤:Referring to Figure 1, the method specifically includes the following steps:

步骤101:获取用户当前进行特定妆容的实时化妆视频。Step 101: Obtain a real-time makeup video of the user currently performing a specific makeup.

步骤102:根据实时化妆视频的初始帧图像和当前帧图像,确定用户进行特定妆容的当前化妆进度。Step 102: According to the initial frame image and the current frame image of the real-time makeup video, determine the current makeup progress of the user performing a specific makeup look.

特定妆容可以为高光妆容、修容妆容、腮红妆容、粉底妆容、遮瑕妆容、眼影妆容、眼线妆容、眉毛妆容等。下面分别对检测不同妆容的化妆进度的过程进行详细说明。Specific makeup can be highlight makeup, trimming makeup, blush makeup, foundation makeup, concealer makeup, eye shadow makeup, eyeliner makeup, eyebrow makeup, etc. The process of detecting the makeup progress of different makeup looks will be described in detail below.

实施例一Embodiment one

本申请实施例提供一种化妆进度检测方法,该方法用于检测高光妆容、修容妆容或其他任意会产生明暗变化的妆容对应的化妆进度。参见图2,该实施例具体包括以下步骤:An embodiment of the present application provides a method for detecting makeup progress, which is used to detect the makeup progress corresponding to highlight makeup, trimming makeup or any other makeup that may produce light and dark changes. Referring to Fig. 2, this embodiment specifically comprises the following steps:

步骤201:获取特定妆容对应的至少一个目标上妆区域及用户当前进行特定妆容的实时化妆视频。Step 201: Obtain at least one target makeup area corresponding to a specific makeup look and a real-time makeup video of the user currently performing the specific makeup look.

本申请实施例的执行主体为服务器。用户的手机或电脑等终端上安装有与服务器提供的化妆进度检测服务相适配的客户端。当用户需要使用化妆进度检测服务时,用户打开终端上的该客户端,客户端显示特定妆容对应的所有目标上妆区域,特定妆容可以包括高光妆容或修容妆容等。目标上妆区域可以包括额头区域、鼻梁区域、鼻尖区域、左侧脸颊区域、右侧脸颊区域、下巴区域等。用户从显示的多个目标上妆区域中选择自己需要化特定妆容的一个或多个目标上妆区域。The execution subject of this embodiment of the application is a server. A client that is compatible with the makeup progress detection service provided by the server is installed on the terminal such as the user's mobile phone or computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays all target makeup areas corresponding to the specific makeup. The specific makeup can include highlight makeup or trimming makeup, etc. The target makeup area may include a forehead area, a nose bridge area, a nose tip area, a left cheek area, a right cheek area, a chin area, and the like. The user selects one or more target makeup areas where he needs to apply specific makeup from the multiple displayed target makeup areas.

作为一种示例，客户端可以以文本选择项的形式来显示特定妆容对应的所有目标上妆区域。如图3所示的显示界面，该显示界面中包括多个目标上妆区域对应的文本选择项及提交按键，用户可以单击选择出自己需要的目标上妆区域的文本选择项，然后点击提交按键。客户端检测到提交按键触发的提交指令后，从该显示界面中获取用户选择的一个或多个目标上妆区域。As an example, the client may display all target makeup areas corresponding to the specific makeup in the form of text selection items. As shown in the display interface of Figure 3, the interface includes text selection items corresponding to multiple target makeup areas and a submit button; the user can click to select the text selection items of the target makeup areas he needs, and then click the submit button. After the client detects the submit instruction triggered by the submit button, it acquires the one or more target makeup areas selected by the user from the display interface.

作为另一种示例，客户端可以显示一张脸部图像，在该脸部图像上标识出特定妆容对应的所有目标上妆区域。用户通过单击操作从显示的该脸部图像中选择出自己需要的目标上妆区域。客户端显示的显示界面中可以包括一张脸部图像和提交按键，该脸部图像中可以通过实线圈出特定妆容对应的每个目标上妆区域，用户点击自己需要的目标上妆区域后，被选中的目标上妆区域的轮廓线可以显示为预设颜色(如红色或黄色)，或者被选中的目标上妆区域的全部会显示为预设颜色。用户选择出自己需要的目标上妆区域后，点击提交按键。客户端检测到提交按键触发的提交指令后，从显示的脸部图像中获取用户选择的一个或多个目标上妆区域。As another example, the client may display a face image on which all target makeup areas corresponding to the specific makeup are identified. The user selects the desired target makeup areas from the displayed face image by clicking. The display interface displayed by the client may include a face image and a submit button, and each target makeup area corresponding to the specific makeup may be outlined in the face image by a solid line. After the user clicks a target makeup area he needs, the outline of the selected area may be displayed in a preset color (such as red or yellow), or the whole selected area may be displayed in the preset color. After selecting the target makeup areas he needs, the user clicks the submit button. After the client detects the submit instruction triggered by the submit button, it acquires the one or more target makeup areas selected by the user from the displayed face image.

用户的终端通过上述任一方式获得用户选择的一个或多个目标上妆区域之后，将该一个或多个目标上妆区域的区域标识信息发送给服务器，该区域标识信息中可以包括用户选择的每个目标上妆区域的名称或编号等标识信息。服务器接收用户的终端发送的该区域标识信息，根据该区域标识信息确定出用户选择的特定妆容对应的至少一个目标上妆区域。After the user's terminal obtains the one or more target makeup areas selected by the user through any of the above methods, it sends the area identification information of those areas to the server. The area identification information may include identification information such as the name or number of each target makeup area selected by the user. The server receives the area identification information sent by the user's terminal and determines, according to it, at least one target makeup area corresponding to the specific makeup look selected by the user.

通过上述方式由用户自定义选择自己需要化特定妆容的区域,能够满足不同用户对高光、修容等特定妆容的个性化需求。Through the above method, the user can customize the area where he needs to apply a specific makeup look, which can meet the individual needs of different users for specific makeup looks such as highlighting and contouring.

在本申请的另一些实施例中,也可以不由用户自己选择目标上妆区域,而是预先在服务器中配置好特定妆容对应的多个目标上妆区域。如此在用户需要检测化妆进度时,无需从用户的终端获取目标上妆区域,节省带宽,同时简化了用户操作,缩短了处理时间。In some other embodiments of the present application, instead of the user selecting the target makeup application area, multiple target makeup application areas corresponding to a specific makeup look may be pre-configured in the server. In this way, when the user needs to detect the makeup progress, there is no need to obtain the target makeup area from the user's terminal, which saves bandwidth, simplifies user operations, and shortens processing time.

上述客户端的显示界面中还设置有视频上传接口，当检测到用户点击该视频上传接口时，调用终端的摄像装置拍摄用户的化妆视频，在拍摄过程中用户在自己脸部的上述目标上妆区域中进行上述特定妆容的化妆操作。用户的终端将拍摄的化妆视频以视频流的形式传输给服务器。服务器接收用户的终端传输的该化妆视频的每一帧图像。The display interface of the above client is also provided with a video upload interface. When it is detected that the user clicks the video upload interface, the camera of the terminal is called to shoot the user's makeup video; during shooting, the user performs the makeup operations of the above specific makeup in the above target makeup areas on his own face. The user's terminal transmits the captured makeup video to the server in the form of a video stream. The server receives each frame of the makeup video transmitted by the user's terminal.

在本申请实施例中,服务器将接收到的第一帧图像作为初始帧图像,以该初始帧图像作为参考来比对后续接收到的每一帧图像对应的特定妆容的当前化妆进度。由于对于后续每一帧图像的处理方式都相同,因此本申请实施例以当前时刻接收到的当前帧图像为例来阐述化妆进度检测的过程。In the embodiment of the present application, the server takes the received first frame image as the initial frame image, and uses the initial frame image as a reference to compare the current makeup progress of the specific makeup corresponding to each subsequently received frame image. Since the processing method for each subsequent image frame is the same, the embodiment of the present application uses the current frame image received at the current moment as an example to illustrate the process of makeup progress detection.

服务器通过本步骤获得特定妆容对应的至少一个目标上妆区域及用户化妆视频的初始帧图像和当前帧图像后,通过如下步骤202和203的操作来确定用户的当前化妆进度。After the server obtains at least one target makeup area corresponding to a specific makeup look and the initial frame image and current frame image of the user's makeup video through this step, the server determines the user's current makeup progress through the following steps 202 and 203 .

步骤202:根据目标上妆区域,从初始帧图像中获取特定妆容对应的第一目标区域图像,及从当前帧图像中获取特定妆容对应的第二目标区域图像。Step 202: Obtain a first target area image corresponding to the specific makeup from the initial frame image according to the target makeup area, and acquire a second target area image corresponding to the specific makeup from the current frame image.

服务器具体通过如下步骤S1-S3的操作来获取初始帧图像对应的第一目标区域图像,包括:The server specifically obtains the first target area image corresponding to the initial frame image through the following steps S1-S3, including:

S1:检测初始帧图像对应的第一人脸关键点。S1: Detect the first face key point corresponding to the initial frame image.

服务器中配置了预先训练好的用于检测人脸关键点的检测模型，通过该检测模型提供人脸关键点检测的接口服务。服务器获取到用户化妆视频的初始帧图像后，调用人脸关键点检测的接口服务，通过检测模型识别出初始帧图像中用户脸部的所有人脸关键点。为了与当前帧图像对应的人脸关键点进行区分，本申请实施例将初始帧图像对应的所有人脸关键点称为第一人脸关键点。将当前帧图像对应的所有人脸关键点称为第二人脸关键点。The server is configured with a pre-trained detection model for detecting face key points, and provides a face key point detection interface service through this model. After obtaining the initial frame image of the user's makeup video, the server calls the face key point detection interface service and identifies, through the detection model, all face key points of the user's face in the initial frame image. To distinguish them from the face key points corresponding to the current frame image, the embodiments of the present application refer to all face key points corresponding to the initial frame image as first face key points, and to all face key points corresponding to the current frame image as second face key points.

其中,识别出的人脸关键点包括用户脸部轮廓上的关键点及嘴巴、鼻子、眼睛、眉毛等部位的关键点。识别出的人脸关键点的数目可以为106个。Among them, the recognized key points of the human face include key points on the outline of the user's face and key points of the mouth, nose, eyes, eyebrows and other parts. The number of recognized face key points may be 106.

S2:根据第一人脸关键点,获取初始帧图像对应的人脸区域图像。S2: According to the key points of the first face, obtain the face area image corresponding to the initial frame image.

服务器具体通过如下步骤S20-S22的操作来获取初始帧图像对应的人脸区域图像,包括:The server specifically obtains the face area image corresponding to the initial frame image through the following steps S20-S22, including:

S20:根据第一人脸关键点,对初始帧图像及第一人脸关键点进行旋转矫正。S20: According to the key points of the first face, perform rotation correction on the initial frame image and the key points of the first face.

由于用户在通过终端拍摄化妆视频时，不能保证每一帧图像中人脸的姿态角度均相同，为了提高当前帧图像与初始帧图像比对的准确性，需要对每一帧图像中的人脸进行旋转矫正，以使矫正后每一帧图像中人脸眼睛的连线都处在同一水平线上，从而确保每一帧图像中人脸的姿态角度相同，避免因姿态角度不同导致化妆进度检测误差较大的问题。When a user shoots a makeup video through the terminal, the pose angle of the face cannot be guaranteed to be the same in every frame. To improve the accuracy of the comparison between the current frame image and the initial frame image, rotation correction needs to be performed on the face in each frame image, so that after correction the line connecting the eyes of the face lies on the same horizontal line in every frame. This ensures that the pose angle of the face is the same in every frame and avoids large makeup progress detection errors caused by differing pose angles.

具体地,根据第一人脸关键点包括的左眼关键点和右眼关键点,分别确定左眼中心坐标和右眼中心坐标。从第一人脸关键点确定出左眼区域的全部左眼关键点及右眼区域的全部右眼关键点。对确定出的全部左眼关键点的横坐标取均值,以及对全部左眼关键点的纵坐标取均值,将左眼对应的横坐标的均值和纵坐标的均值组成一个坐标,将该坐标确定为左眼中心坐标。按照同样的方式确定出右眼中心坐标。Specifically, according to the left eye key point and the right eye key point included in the first human face key point, the left eye center coordinates and the right eye center coordinates are respectively determined. All the left-eye key points in the left-eye area and all the right-eye key points in the right-eye area are determined from the first face key points. Take the mean value of the abscissa of all the determined key points of the left eye, and take the mean of the ordinate of all the key points of the left eye, combine the mean value of the abscissa and the mean value of the ordinate corresponding to the left eye into a coordinate, and determine the coordinate is the coordinates of the center of the left eye. Determine the coordinates of the center of the right eye in the same manner.
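The eye-centre computation described above is a plain per-axis average; as an illustrative sketch (the key-point coordinates below are hypothetical, and real detectors return more points per eye):

```python
import numpy as np

def eye_center(eye_keypoints):
    """Centre of one eye: the mean abscissa and mean ordinate of its key points."""
    pts = np.asarray(eye_keypoints, dtype=float)
    cx, cy = pts.mean(axis=0)
    return (cx, cy)

# Hypothetical left-eye key points taken from the first face key points
left_eye = [(120.0, 200.0), (130.0, 196.0), (140.0, 200.0), (130.0, 204.0)]
left_center = eye_center(left_eye)  # → (130.0, 200.0)
```

The right-eye centre is obtained the same way from the right-eye key points.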

然后根据左眼中心坐标和右眼中心坐标，确定初始帧图像对应的旋转角度及旋转中心点坐标。如图4所示，根据左眼中心坐标和右眼中心坐标计算出两个坐标的水平差值dx和竖直差值dy，以及计算出左眼中心坐标和右眼中心坐标的两眼连线长度d。依据两眼连线长度d、水平差值dx和竖直差值dy，计算出两眼连线与水平方向的夹角θ，该夹角θ即为初始帧图像对应的旋转角度。然后根据左眼中心坐标和右眼中心坐标计算出两眼连线的中心点坐标，该中点坐标即为初始帧图像对应的旋转中心点坐标。Then, according to the left eye center coordinates and the right eye center coordinates, the rotation angle and the rotation center point coordinates corresponding to the initial frame image are determined. As shown in Figure 4, the horizontal difference dx and the vertical difference dy of the two coordinates are calculated from the left eye center coordinates and the right eye center coordinates, as is the length d of the line connecting the two eye centers. From the eye-line length d, the horizontal difference dx and the vertical difference dy, the angle θ between the eye line and the horizontal direction is calculated; this angle θ is the rotation angle corresponding to the initial frame image. The coordinates of the midpoint of the line connecting the two eye centers are then calculated from the left eye center coordinates and the right eye center coordinates; this midpoint is the rotation center point corresponding to the initial frame image.

根据计算出的旋转角度和旋转中心点坐标,对初始帧图像及第一人脸关键点进行旋转矫正。具体将旋转角度和旋转中心点坐标输入用于计算图片的旋转矩阵的预设函数中,该预设函数可以为OpenCV中的函数cv2.getRotationMatrix2D()。通过调用该预设函数获得初始帧图像对应的旋转矩阵。然后计算初始帧图像和该旋转矩阵的乘积,得到矫正后的初始帧图像。利用旋转矩阵对初始帧图像进行矫正的操作,也可以通过调用OpenCV中的函数cv2.warpAffine()来完成。Perform rotation correction on the initial frame image and the key points of the first face according to the calculated rotation angle and the coordinates of the rotation center point. Specifically, the rotation angle and the coordinates of the rotation center point are input into a preset function used to calculate the rotation matrix of the picture, and the preset function may be the function cv2.getRotationMatrix2D() in OpenCV. Obtain the rotation matrix corresponding to the initial frame image by calling this preset function. Then calculate the product of the initial frame image and the rotation matrix to obtain the rectified initial frame image. The operation of correcting the initial frame image by using the rotation matrix can also be done by calling the function cv2.warpAffine() in OpenCV.
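The angle and rotation-matrix computation above can be sketched as follows. The matrix layout mirrors what cv2.getRotationMatrix2D(center, angle, 1.0) returns per the OpenCV documentation; the eye coordinates are hypothetical:

```python
import math
import numpy as np

def roll_angle_and_center(left_eye, right_eye):
    """Angle θ of the eye line vs. the horizontal, and the eye-line midpoint."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    theta = math.degrees(math.atan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    return theta, center

def rotation_matrix(center, angle_deg):
    """2x3 affine matrix in the same layout as cv2.getRotationMatrix2D."""
    a = math.radians(angle_deg)
    alpha, beta = math.cos(a), math.sin(a)
    cx, cy = center
    return np.array([[alpha,  beta, (1 - alpha) * cx - beta * cy],
                     [-beta, alpha, beta * cx + (1 - alpha) * cy]])

theta, center = roll_angle_and_center((100.0, 210.0), (180.0, 190.0))
M = rotation_matrix(center, theta)
```

In practice the matrix M is passed to cv2.warpAffine(image, M, (width, height)) to rotate the whole frame.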

对于第一人脸关键点，需要对每个第一人脸关键点逐一进行矫正，以与矫正后的初始帧图像相对应。在对第一人脸关键点逐一矫正时，需要进行两次坐标系的转换，第一次将以初始帧图像左上角为原点的坐标系转化为以左下角为原点的坐标系，第二次进一步将以左下角为原点的坐标系转化为以上述旋转中心点坐标为坐标原点的坐标系，如图5所示。经过两次坐标系转换后对每个第一人脸关键点进行如下公式(1)的转换，可完成对第一人脸关键点的旋转矫正。For the first face key points, each one needs to be corrected one by one to correspond to the corrected initial frame image. When correcting the first face key points one by one, two coordinate system conversions are required: first, the coordinate system with the origin at the upper left corner of the initial frame image is converted into a coordinate system with the origin at the lower left corner; second, the coordinate system with the origin at the lower left corner is further converted into a coordinate system with the above rotation center point as the origin, as shown in Fig. 5. After the two coordinate system conversions, each first face key point is transformed by the following formula (1) to complete the rotation correction of the first face key points.

x = x0·cosθ + y0·sinθ
y = -x0·sinθ + y0·cosθ        (1)

在公式(1)中，x0、y0分别为旋转矫正前第一人脸关键点的横坐标和纵坐标，x、y分别为旋转矫正后第一人脸关键点的横坐标和纵坐标，θ为上述旋转角度。In formula (1), x0 and y0 are respectively the abscissa and ordinate of a first face key point before rotation correction, x and y are respectively its abscissa and ordinate after rotation correction, and θ is the above rotation angle.
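Formula (1) is the standard two-dimensional rotation applied once the origin sits at the rotation centre; a minimal sketch (the sample point and angle are chosen only to illustrate the effect):

```python
import math

def correct_keypoint(x0, y0, theta):
    """Formula (1): rotate a key point, expressed relative to the rotation
    centre after the two coordinate-system conversions, by angle theta (radians)."""
    x = x0 * math.cos(theta) + y0 * math.sin(theta)
    y = -x0 * math.sin(theta) + y0 * math.cos(theta)
    return x, y

# A point on an eye line tilted 45 degrees lands back on the horizontal axis
x, y = correct_keypoint(1.0, 1.0, math.pi / 4)  # → (√2, 0)
```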

经过矫正后的初始帧图像和第一人脸关键点是基于整张图像的，整张图像不仅包含用户的人脸信息，还包括其他多余的图像信息，因此需要通过如下步骤S21对矫正后的图像进行人脸区域的裁剪。The corrected initial frame image and the corrected first face key points are based on the entire image, which contains not only the user's face information but also other redundant image information; therefore, the face area needs to be cropped from the corrected image through the following step S21.

S21:根据矫正后的第一人脸关键点,从矫正后的初始帧图像中截取包含人脸区域的图像。S21: According to the corrected first key point of the human face, intercept an image including the human face area from the corrected initial frame image.

首先从矫正后的第一人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值。然后根据最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值，确定矫正后的初始帧图像中人脸区域对应的截取框。具体地，将最小横坐标值和最小纵坐标值组成一个坐标点，将该坐标点作为人脸区域对应的截取框的左上角顶点。将最大横坐标值和最大纵坐标值组成另一个坐标点，将该坐标点作为人脸区域对应的截取框的右下角顶点。根据上述左上角顶点和右下角顶点在矫正后的初始帧图像中确定截取框的位置，从矫正后的初始帧图像中截取出位于该截取框内的图像，即截取出包含人脸区域的图像。Firstly, the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value are determined from the corrected first face key points. Then, according to these four values, the interception frame corresponding to the face area in the corrected initial frame image is determined. Specifically, the minimum abscissa value and the minimum ordinate value form one coordinate point, which is taken as the upper left vertex of the interception frame corresponding to the face area; the maximum abscissa value and the maximum ordinate value form another coordinate point, which is taken as the lower right vertex of the interception frame. The position of the interception frame in the corrected initial frame image is determined from the above upper left and lower right vertices, and the image located within the interception frame is cut out from the corrected initial frame image, that is, the image containing the face area is cut out.

在本申请的另一些实施例中,为了确保将用户的全部的人脸区域截取出来,避免截取不完整导致后续化妆进度检测误差很大的情况发生,还可以将上述截取框放大预设倍数,该预设倍数可以为1.15或1.25等。本申请实施例并不限制预设倍数的具体取值,实际应用中可根据需求来设定该预设倍数。将上述截取框向周围放大预设倍数之后,从矫正后的初始帧图像中截取出位于放大后的截取框中的图像,从而截取出包含用户的完整人脸区域的图像。In other embodiments of the present application, in order to ensure that all of the user's face area is intercepted, and to avoid incomplete interception resulting in a large error in the follow-up makeup progress detection, the above interception frame can also be enlarged by a preset multiple, The preset multiple can be 1.15 or 1.25 and so on. The embodiment of the present application does not limit the specific value of the preset multiple, and the preset multiple can be set according to requirements in practical applications. After the aforementioned clipping frame is enlarged by a preset multiple, the image located in the enlarged clipping frame is clipped from the corrected initial frame image, so as to clip an image including the complete face area of the user.
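The crop box of step S21, with the optional enlargement, can be sketched as below. Enlarging symmetrically about the box centre is an assumption made for illustration; the text only states that the frame is enlarged by a preset multiple:

```python
import numpy as np

def face_crop_box(keypoints, scale=1.0):
    """Axis-aligned box over the key points, optionally enlarged by `scale`."""
    pts = np.asarray(keypoints, dtype=float)
    x1, y1 = pts.min(axis=0)              # upper left: (min x, min y)
    x2, y2 = pts.max(axis=0)              # lower right: (max x, max y)
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

box = face_crop_box([(100.0, 80.0), (300.0, 90.0), (200.0, 400.0)], scale=1.15)
# The crop itself would be image[int(box[1]):int(box[3]), int(box[0]):int(box[2])]
```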

S22:将包含人脸区域的图像缩放至预设尺寸,得到初始帧图像对应的人脸区域图像。S22: Scale the image including the face area to a preset size to obtain the face area image corresponding to the initial frame image.

通过上述方式从初始帧图像中截取出包含用户的人脸区域的图像后,将包含人脸区域的图像缩放至预设尺寸,得到初始帧图像对应的人脸区域图像。该预设尺寸可以为390×390或400×400等。本申请实施例并不限制预设尺寸的具体取值,实际应用中可根据需求进行设定。After the image including the user's face area is intercepted from the initial frame image in the above manner, the image including the face area is scaled to a preset size to obtain a face area image corresponding to the initial frame image. The preset size may be 390×390 or 400×400 or the like. The embodiment of the present application does not limit the specific value of the preset size, which can be set according to requirements in practical applications.

为了使第一人脸关键点与缩放后的人脸区域图像相适应，将截取的包含人脸区域的图像缩放至预设尺寸后，还需根据缩放前包含人脸区域的图像的尺寸及该预设尺寸，对上述矫正后的第一人脸关键点进行缩放平移处理。具体地，依据缩放前包含人脸区域的图像的尺寸及图像所需缩放至的预设尺寸，确定每个第一人脸关键点的平移方向及平移距离，进而根据每个第一人脸关键点对应的平移方向和平移距离，分别对每个第一人脸关键点进行平移操作，并记录平移后的每个第一人脸关键点的坐标。In order to adapt the first face key points to the scaled face area image, after the captured image containing the face area is scaled to the preset size, the corrected first face key points also need to be scaled and translated according to the size of the image containing the face area before scaling and the preset size. Specifically, the translation direction and translation distance of each first face key point are determined from the size of the image containing the face area before scaling and the preset size to which the image is scaled; each first face key point is then translated according to its translation direction and distance, and the coordinates of each translated first face key point are recorded.
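One straightforward reading of the scaling-and-translation step is a per-axis linear map from the crop box to the preset-size image; a sketch under that assumption:

```python
def rescale_keypoints(keypoints, crop_box, size=390):
    """Map key points from crop-box coordinates to a size x size image."""
    x1, y1, x2, y2 = crop_box
    sx = size / (x2 - x1)                 # horizontal scale factor
    sy = size / (y2 - y1)                 # vertical scale factor
    return [((x - x1) * sx, (y - y1) * sy) for x, y in keypoints]

pts = rescale_keypoints([(50.0, 50.0), (100.0, 0.0)],
                        crop_box=(0.0, 0.0, 100.0, 100.0), size=390)
# → [(195.0, 195.0), (390.0, 0.0)]
```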

通过上述方式从初始帧图像中获得人脸区域图像，并通过旋转矫正和平移缩放等操作使第一人脸关键点与获得的人脸区域图像相适应，之后通过如下步骤S3的方式从人脸区域图像中提取出目标上妆区域对应的图像区域。The face area image is obtained from the initial frame image in the above manner, and the first face key points are adapted to the obtained face area image through operations such as rotation correction, translation and scaling. Afterwards, the image area corresponding to the target makeup area is extracted from the face area image through step S3 below.

在本申请的另一些实施例中,执行步骤S3之前,还可以先对人脸区域图像进行高斯滤波处理,用于去除人脸区域图像中的噪声。具体地,根据预设尺寸的高斯核,对初始帧图像对应的人脸区域图像进行高斯滤波处理。In some other embodiments of the present application, before step S3 is performed, Gaussian filtering may be performed on the image of the face area to remove noise in the image of the face area. Specifically, Gaussian filtering is performed on the face area image corresponding to the initial frame image according to a Gaussian kernel of a preset size.

高斯滤波的高斯核是高斯滤波处理的关键参数，高斯核选取过小，则不能达到很好的滤波效果，高斯核选取过大，虽然能过滤掉图像中的噪声信息，但同时会平滑掉图像中有用的信息。本申请实施例选取预设尺寸的高斯核，预设尺寸可以为9×9。此外高斯滤波函数的另一组参数sigmaX、sigmaY均设置为0，经过高斯滤波后，图像信息更加平滑，进而提高后续获取化妆进度的准确性。The Gaussian kernel is the key parameter of Gaussian filtering. If the kernel is too small, a good filtering effect cannot be achieved; if it is too large, the noise in the image is filtered out, but useful information in the image is smoothed away as well. The embodiment of the present application selects a Gaussian kernel of a preset size, which may be 9×9. In addition, the other parameters of the Gaussian filter function, sigmaX and sigmaY, are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequent makeup progress acquisition.
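When sigma is passed as 0, OpenCV derives it from the kernel size as sigma = 0.3·((ksize−1)·0.5 − 1) + 0.8 (per the cv2.getGaussianKernel documentation), which gives sigma = 1.7 for the 9×9 call cv2.GaussianBlur(img, (9, 9), 0). A numpy-only sketch of the resulting one-dimensional kernel:

```python
import numpy as np

def gaussian_kernel_1d(ksize=9):
    """1-D Gaussian kernel as OpenCV builds it when sigma is passed as 0."""
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8   # 1.7 for ksize = 9
    x = np.arange(ksize) - (ksize - 1) / 2.0
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                            # normalised to sum to 1

k = gaussian_kernel_1d(9)
# The 9x9 kernel used for filtering is the outer product np.outer(k, k)
```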

通过上述方式获得人脸区域图像,或者对人脸区域图像进行高斯滤波处理之后,通过步骤S3从人脸区域图像中提取特定妆容对应的目标区域图像。The face region image is obtained in the above manner, or after the Gaussian filtering is performed on the face region image, the target region image corresponding to the specific makeup is extracted from the face region image through step S3.

S3:根据第一人脸关键点和目标上妆区域,从人脸区域图像中提取特定妆容对应的第一目标区域图像。S3: According to the key points of the first face and the target makeup area, extract the first target area image corresponding to the specific makeup from the face area image.

高光妆容或修容妆容等特定妆容是对面部的固定区域进行上妆的一种化妆方式,如高光妆容或修容妆容通常是对额头区域、左侧脸颊区域、右侧脸颊区域、鼻梁区域、鼻尖区域或下巴区域等特定区域进行高光或修容。因此可以直接从人脸区域图像中抠取出需要进行特定妆容的这些特定区域,进而避免无效区域对特定妆容的化妆进度检测的干扰,提高化妆进度检测的准确性。Specific makeup such as highlight makeup or trimming makeup is a makeup method that applies makeup to a fixed area of the face. For example, highlighting makeup or trimming makeup is usually applied to the forehead area, left cheek area, right cheek area, nose bridge area, Highlight or contour specific areas such as the nose tip area or the chin area. Therefore, it is possible to directly extract these specific areas that require specific makeup from the face area image, thereby avoiding the interference of invalid areas on the makeup progress detection of specific makeup, and improving the accuracy of makeup progress detection.

服务器具体通过如下步骤S30-S32的操作来获得第一目标区域图像,包括:The server specifically obtains the first target area image through the following steps S30-S32, including:

S30:从第一人脸关键点中确定出位于人脸区域图像中目标上妆区域对应的区域轮廓上的一个或多个目标关键点。S30: Determine one or more target key points located on the contour of the area corresponding to the target makeup area in the face area image from the first human face key points.

首先从人脸区域图像中确定出目标上妆区域对应的区域位置,然后从第一人脸关键点中确定出位于该区域位置的一个或多个第一人脸关键点。将确定出的一个或多个第一人脸关键点确定为该目标上妆区域对应的区域轮廓上的一个或多个目标关键点。Firstly, the area position corresponding to the target makeup area is determined from the human face area image, and then one or more first human face key points located at the position of the area are determined from the first human face key points. The determined one or more first human face key points are determined as one or more target key points on the contour of the area corresponding to the target makeup area.

对于步骤201中获取的特定妆容对应的每个目标上妆区域,都按照上述方式分别确定出每个目标上妆区域对应的目标关键点。For each target makeup area corresponding to the specific makeup acquired in step 201, the target key points corresponding to each target makeup area are respectively determined in the above-mentioned manner.

S31:根据目标上妆区域对应的目标关键点,生成人脸区域图像对应的掩膜图像。S31: Generate a mask image corresponding to the face area image according to the target key points corresponding to the target makeup area.

由于步骤201中获得了特定妆容对应的一个或多个目标上妆区域，对于不同的目标上妆区域，在生成人脸区域图像对应的掩膜图像时的具体操作不同。对于特定妆容对应的每个目标上妆区域，首先判断该目标上妆区域对应的目标关键点的数目是否为多个，通常额头区域、左侧脸颊区域、右侧脸颊区域、鼻梁区域、下巴区域等目标上妆区域对应的目标关键点的数目为多个。对于这些目标上妆区域，根据目标上妆区域对应的每个目标关键点，确定该目标上妆区域在人脸区域图像中的每个边缘坐标。具体按照预设偏移规则将目标关键点的坐标适当地向左、右或上、下偏移部分像素，从而获得该目标上妆区域对应的边缘坐标。Since one or more target makeup areas corresponding to the specific makeup are obtained in step 201, the specific operations for generating the mask image corresponding to the face area image differ for different target makeup areas. For each target makeup area corresponding to the specific makeup, it is first judged whether the number of target key points corresponding to that area is more than one; usually the forehead area, left cheek area, right cheek area, nose bridge area, chin area and other target makeup areas each correspond to multiple target key points. For these target makeup areas, each edge coordinate of the target makeup area in the face area image is determined according to each target key point corresponding to the area. Specifically, the coordinates of the target key points are appropriately shifted left, right, up or down by a number of pixels according to a preset offset rule, so as to obtain the edge coordinates corresponding to the target makeup area.

本申请实施例预先在服务器中配置了不同目标上妆区域对应的预设偏移规则,预设偏移规则中规定了目标上妆区域对应的目标关键点的偏移方向及偏移的像素数目。In the embodiment of the present application, preset offset rules corresponding to different target makeup areas are configured in the server in advance; a preset offset rule specifies the offset direction and the number of offset pixels of the target key points corresponding to a target makeup area.

例如,对于额头区域,其对应的目标关键点包括脸部轮廓在额头两侧处的人脸关键点以及发际线处的人脸关键点,额头区域对应的预设偏移规则可以规定发际线处的目标关键点向下偏移,额头左侧的目标关键点向右偏移,额头右侧的目标关键点向左偏移,且规定目标关键点偏移的像素数目为4个像素。按照该预设偏移规则对额头区域对应的每个目标关键点进行偏移,每个目标关键点偏移之后的坐标点即为额头区域对应的边缘坐标。For example, for the forehead area, the corresponding target key points include the face contour key points on both sides of the forehead and the face key points at the hairline. The preset offset rule for the forehead area may specify that the target key points at the hairline are shifted downward, the target key points on the left side of the forehead are shifted to the right, and the target key points on the right side of the forehead are shifted to the left, with the offset specified as 4 pixels. Each target key point of the forehead area is shifted according to this preset offset rule, and the shifted coordinate of each target key point is an edge coordinate of the forehead area.
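The offset step above can be sketched as follows. This is a minimal illustration only: the (x, y) point format, the direction encoding, and the default of 4 pixels (taken from the forehead example) are assumptions, not the patent's actual implementation.

```python
import numpy as np

def apply_offset_rule(points, directions, pixels=4):
    """Shift each (x, y) key point by `pixels` in its rule-given direction.
    `directions` holds one of 'left', 'right', 'up', 'down' per point."""
    step = {'left': (-pixels, 0), 'right': (pixels, 0),
            'up': (0, -pixels), 'down': (0, pixels)}  # image y grows downward
    return np.array([(x + step[d][0], y + step[d][1])
                     for (x, y), d in zip(points, directions)])
```

The shifted points are then used directly as the edge coordinates of the target makeup area.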

再如,对于鼻梁区域,其对应的目标关键点可以包括鼻梁上竖直排列的多个人脸关键点,鼻梁区域对应的预设偏移规则可以规定竖直排列的每个目标关键点分别向右偏移和向左偏移一定的像素数目,可以规定目标关键点偏移的像素数目为3个像素或4个像素等。按照该预设偏移规则对鼻梁区域对应的每个目标关键点进行偏移,每个目标关键点偏移之后的坐标点即为鼻梁区域对应的边缘坐标。For another example, for the nose bridge area, the corresponding target key points may include multiple face key points arranged vertically along the bridge of the nose. The preset offset rule for the nose bridge area may specify that each vertically arranged target key point is shifted both to the right and to the left by a certain number of pixels, for example 3 or 4 pixels. Each target key point of the nose bridge area is shifted according to this preset offset rule, and the shifted coordinate of each target key point is an edge coordinate of the nose bridge area.

通过上述方式获得包括多个目标关键点的目标上妆区域对应的边缘坐标后,通过区域填充函数将每个边缘坐标围成的区域内的所有像素点的像素值均修改为预设值,得到该目标上妆区域对应的掩膜区域。其中,预设值可以为255。After the edge coordinates corresponding to a target makeup area with multiple target key points are obtained in the above manner, a region filling function modifies the pixel values of all pixels within the region enclosed by the edge coordinates to a preset value, yielding the mask region corresponding to that target makeup area. The preset value may be 255.
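A sketch of the region filling step. In practice a library routine such as OpenCV's fillPoly would typically serve as the region filling function; the pure-NumPy ray-casting version below is only illustrative and assumes a simple, non-self-intersecting polygon of edge coordinates.

```python
import numpy as np

def fill_polygon_mask(shape, vertices, value=255):
    """Fill the polygon enclosed by `vertices` ((x, y) edge coordinates)
    with `value` in a zero-initialised single-channel mask."""
    h, w = shape
    mask = np.zeros((h, w), dtype=np.uint8)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    inside = np.zeros((h, w), dtype=bool)
    n = len(vertices)
    # Ray casting: toggle `inside` each time a horizontal ray crosses an edge.
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        crosses = ((ys >= min(y0, y1)) & (ys < max(y0, y1)) &
                   (xs < x0 + (ys - y0) * (x1 - x0) / (y1 - y0 + 1e-12)))
        inside ^= crosses
    mask[inside] = value
    return mask
```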

若判断出某个目标上妆区域对应的目标关键点的数目为一个,通常鼻尖区域等目标上妆区域对应的目标关键点的数目为一个。对于只有一个目标关键点的目标上妆区域,以该目标关键点为中心,绘制预设大小的椭圆区域。将该椭圆区域内的所有像素点的像素值均修改为预设值,得到该目标上妆区域对应的掩膜区域。If it is determined that a target makeup area corresponds to a single target key point (usually target makeup areas such as the nose tip correspond to a single target key point), an elliptical region of a preset size is drawn centered on that target key point. The pixel values of all pixels within the elliptical region are modified to the preset value, yielding the mask region corresponding to that target makeup area.
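The single-key-point case above can be sketched as follows; the axis-aligned ellipse equation and the semi-axis size parameters are illustrative assumptions.

```python
import numpy as np

def ellipse_mask(shape, center, axes, value=255):
    """Set pixels inside an axis-aligned ellipse to `value`.
    `center` is (cx, cy); `axes` is (semi_axis_x, semi_axis_y)."""
    h, w = shape
    cx, cy = center
    ax, ay = axes
    ys, xs = np.mgrid[0:h, 0:w]
    inside = ((xs - cx) / ax) ** 2 + ((ys - cy) / ay) ** 2 <= 1.0
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[inside] = value
    return mask
```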

通过上述方式获得特定妆容对应的每个目标上妆区域对应的掩膜区域后,将所有掩膜区域之外的所有像素点的像素值均修改为零,即得到人脸区域图像对应的掩膜图像。After the mask region corresponding to each target makeup area of the specific makeup is obtained in the above manner, the pixel values of all pixels outside all the mask regions are set to zero, yielding the mask image corresponding to the face area image.

S32:对掩膜图像和人脸区域图像进行与运算,获得特定妆容对应的第一目标区域图像。S32: Perform an AND operation on the mask image and the face area image to obtain a first target area image corresponding to the specific makeup.

获得人脸区域图像对应的掩膜图像后,对掩膜图像和人脸区域图像进行与运算,即分别将掩膜图像和人脸区域图像中坐标相同的像素点的像素值进行与运算。由于掩膜图像中只有特定妆容对应的目标上妆区域中像素点的像素值为预设值,其他位置的像素点的像素值均为零。因此进行与运算后,得到的第一目标区域图像中仅目标上妆区域对应的位置处的像素点的像素值不为零,其他位置的像素点的像素值均为零。After the mask image corresponding to the face area image is obtained, an AND operation is performed on the mask image and the face area image; that is, the pixel values of the pixels with the same coordinates in the mask image and the face area image are ANDed. Since in the mask image only the pixels in the target makeup areas corresponding to the specific makeup have the preset value, while the pixels at all other positions are zero, after the AND operation only the pixels at the positions corresponding to the target makeup areas in the resulting first target area image have non-zero values; the pixels at all other positions are zero.

本申请实施例中掩膜图像中目标上妆区域中的像素点的像素值取255,则对掩膜图像与人脸区域图像进行与运算之后,得到的第一目标区域图像中目标上妆区域中的像素点的像素值即为人脸区域图像中目标上妆区域中的像素点的像素值。相当于从人脸区域图像中抠取出了特定妆容对应的目标上妆区域的图像。In the embodiment of the present application, the pixels in the target makeup areas of the mask image take the value 255, so after the AND operation on the mask image and the face area image, the pixel values in the target makeup areas of the resulting first target area image equal the pixel values in the target makeup areas of the face area image. This is equivalent to cutting the image of the target makeup areas corresponding to the specific makeup out of the face area image.
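The AND operation described above can be sketched as follows, assuming uint8 images where mask pixels are 255 inside the target makeup areas and 0 elsewhere (ANDing a byte with 255 leaves it unchanged; ANDing with 0 clears it).

```python
import numpy as np

def extract_target_regions(face_img, mask):
    """Keep only the masked (255) region of `face_img`; everything else
    becomes zero. Both arrays are uint8 with matching height/width."""
    # Broadcast a 2-D mask over the colour channels of a 3-D image.
    return np.bitwise_and(face_img,
                          mask[..., None] if face_img.ndim == 3 else mask)
```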

通过上述方式获得的初始帧图像对应的第一目标区域图像的颜色空间为RGB 颜色空间,对于高光妆容主要是提亮肤色,而修容妆容主要是调暗肤色以形成阴影,使得五官更加立体。对于高光妆容、修容妆容等特定妆容,主要是亮度的变化。而RGB颜色空间是日常最为常用的颜色空间,采用红、绿、蓝颜色分量的线性组合来表示颜色,自然环境下获取的图像信息对亮度比较敏感,同一张图片经过更改其亮度信息后,RGB三个分量都会随之相应地改变。而HSV颜色空间是由Hue(色调)、Saturation(饱和度)和Value(亮度)三个分量组成,如果图片亮度信息发生变化,此时仅亮度分量发生改变,而色调和饱和度分量值不会发生明显的变化,相比RGB颜色空间,HSV颜色空间可以分离出亮度信息。The color space of the first target area image obtained from the initial frame image in the above manner is the RGB color space. Highlight makeup mainly brightens the skin tone, while contour makeup mainly darkens the skin tone to form shadows and make the facial features more three-dimensional; for specific makeup looks such as highlight makeup and contour makeup, the change is mainly one of brightness. RGB is the most commonly used color space in daily life and represents a color as a linear combination of red, green, and blue components. Image information captured in natural environments is sensitive to brightness, and after the brightness information of an image is changed, all three RGB components change accordingly. The HSV color space, in contrast, is composed of three components: Hue, Saturation, and Value (brightness). If the brightness information of an image changes, only the Value component changes, while the hue and saturation components do not change noticeably. Compared with the RGB color space, the HSV color space can therefore separate out the brightness information.

因此本申请实施例获得初始帧图像对应的第一目标区域图像之后,还将第一目标区域图像的颜色空间由RGB颜色空间转换到HSV颜色空间。然后从转换后的第一目标区域图像的HSV颜色空间中分离出特定妆容对应的通道分量,得到仅包含该通道分量的单通道的第一目标区域图像。Therefore, in the embodiment of the present application, after the first target area image corresponding to the initial frame image is obtained, the color space of the first target area image is further converted from the RGB color space to the HSV color space. The channel component corresponding to the specific makeup is then separated out of the HSV color space of the converted first target area image, yielding a single-channel first target area image containing only that channel component.
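A minimal sketch of the conversion-and-separation step, using the standard RGB-to-HSV relations V = max(R, G, B) and S = (V - min) / V; the Hue channel is omitted here because highlight and contour makeup compare only the Value component.

```python
import numpy as np

def split_hsv_value(rgb):
    """Convert an RGB uint8 image to HSV and return the V (brightness)
    and S (saturation) channels as uint8 arrays."""
    rgb = rgb.astype(np.float32) / 255.0
    v = rgb.max(axis=-1)                      # Value = max(R, G, B)
    c = v - rgb.min(axis=-1)                  # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    return (np.round(v * 255).astype(np.uint8),
            np.round(s * 255).astype(np.uint8))
```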

例如,高光妆容、修容妆容等特定妆容对应的通道分量为亮度分量,而其他改变面部颜色的特定妆容对应的通道分量为色调分量等。For example, the channel component corresponding to specific makeup such as highlight makeup and contour makeup is the Value (brightness) component, while the channel component corresponding to other specific makeup that changes the facial color is the Hue component, and so on.

将第一目标区域图像转换到HSV颜色空间,并通过通道分离,分离出特定妆容对应的通道分量,便于后续对目标上妆区域是否完成了特定妆容的化妆过程的判断。Converting the first target area image to the HSV color space and separating out, through channel separation, the channel component corresponding to the specific makeup facilitates the subsequent judgment of whether the makeup process of the specific makeup has been completed in the target makeup areas.

对于当前帧图像,与上述初始帧图像的处理过程相同,检测出当前帧图像对应的第二人脸关键点,依据第二人脸关键点,截取出当前帧图像对应的人脸区域图像,根据第二人脸关键点生成当前帧图像对应的掩膜图像。根据该掩膜图像从当前帧图像对应的人脸区域图像中抠取出特定妆容对应的第二目标区域图像。并将第二目标区域图像转换至HSV颜色空间,且分离出特定妆容对应的通道分量。对于当前帧图像的处理操作的细节,均与上述对初始帧图像的处理细节相同,在此不再赘述。For the current frame image, the processing is the same as that of the initial frame image described above: the second face key points corresponding to the current frame image are detected; the face area image corresponding to the current frame image is cropped out according to the second face key points; a mask image corresponding to the current frame image is generated according to the second face key points; the second target area image corresponding to the specific makeup is extracted from the face area image of the current frame image according to that mask image; and the second target area image is converted to the HSV color space, with the channel component corresponding to the specific makeup separated out. The processing details for the current frame image are identical to those described above for the initial frame image and are not repeated here.

通过上述方式获得初始帧图像对应的第一目标区域图像和当前帧图像对应的第二目标区域图像之后,通过如下方式确定当前帧图像对应的化妆进度。After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in the above manner, the makeup progress corresponding to the current frame image is determined in the following manner.

步骤203:根据第一目标区域图像和第二目标区域图像,确定当前帧图像对应的当前化妆进度。Step 203: According to the first target area image and the second target area image, determine the current makeup progress corresponding to the current frame image.

具体地,分别计算第一目标区域图像和第二目标区域图像中位置相同的像素点对应的上述通道分量的差值绝对值。例如,若特定妆容为高光妆容或修容妆容,则计算第一目标区域图像和第二目标区域图像中坐标相同的像素点之间的亮度分量的差值绝对值。Specifically, the absolute difference of the above channel component is calculated for the pixels at the same position in the first target area image and the second target area image. For example, if the specific makeup is highlight makeup or contour makeup, the absolute difference of the Value (brightness) component between the pixels with the same coordinates in the first target area image and the second target area image is calculated.

根据每个像素点对应的差值绝对值,确定已完成特定妆容的区域面积。具体地,统计对应的差值绝对值满足特定妆容对应的预设化妆完成条件的像素点数目。其中,特定妆容为高光妆容时,预设化妆完成条件为像素点对应的差值绝对值大于第一预设阈值,第一预设阈值可以为11或12等。特定妆容为修容妆容时,预设化妆完成条件为像素点对应的差值绝对值小于第二预设阈值,第二预设阈值可以为7或8等。According to the absolute difference corresponding to each pixel, the area of the region in which the specific makeup has been completed is determined. Specifically, the number of pixels whose absolute difference satisfies the preset makeup completion condition corresponding to the specific makeup is counted. When the specific makeup is highlight makeup, the preset makeup completion condition is that the absolute difference corresponding to a pixel is greater than a first preset threshold, which may be 11 or 12, for example. When the specific makeup is contour makeup, the preset makeup completion condition is that the absolute difference corresponding to a pixel is less than a second preset threshold, which may be 7 or 8, for example.

将统计的满足预设化妆完成条件的像素点数目确定为已完成特定妆容的区域面积。统计第一目标区域图像或第二目标区域图像中所有目标上妆区域中的所有像素点的总数,将该总数确定为所有目标上妆区域对应的总区域面积。然后计算已完成特定妆容的区域面积与目标上妆区域对应的总区域面积之间的比值,将该比值确定为用户对应的特定妆容的当前化妆进度。The counted number of pixels satisfying the preset makeup completion condition is taken as the area of the region in which the specific makeup has been completed. The total number of pixels in all target makeup areas of the first target area image or the second target area image is counted and taken as the total area of all target makeup areas. The ratio of the completed area to the total area of the target makeup areas is then calculated, and this ratio is taken as the user's current makeup progress for the specific makeup.
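The progress computation above can be sketched as follows, using the highlight-style completion condition (absolute brightness difference greater than a threshold; the default of 11 is taken from the example threshold mentioned above).

```python
import numpy as np

def makeup_progress(v_initial, v_current, region_mask, threshold=11):
    """Fraction of target-region pixels whose brightness change exceeds
    `threshold`. All inputs are single-channel uint8 arrays."""
    # Cast to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(v_current.astype(np.int16) - v_initial.astype(np.int16))
    in_region = region_mask > 0
    done = np.count_nonzero(in_region & (diff > threshold))
    total = np.count_nonzero(in_region)
    return done / total if total else 0.0
```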

在本申请的另一些实施例中,为了进一步提高化妆进度检测的准确性,还对上述第一目标区域图像和第二目标区域图像中的目标上妆区域进行进一步对齐。具体地,分别对仅包含上述通道分量的单通道的第一目标区域图像和第二目标区域图像进行二值化处理,即将第一目标区域图像和第二目标区域图像中目标上妆区域中的像素点对应的上述通道分量的值均修改为1,将其余位置处的像素点的上述通道分量的值均修改为0。通过二值化处理得到第一目标区域图像对应的第一二值化掩膜图像和第二目标区域图像对应的第二二值化掩膜图像。In some other embodiments of the present application, in order to further improve the accuracy of makeup progress detection, the target makeup areas in the first and second target area images are further aligned. Specifically, binarization is performed on the single-channel first and second target area images that contain only the above channel component: the values of the channel component of the pixels in the target makeup areas of both images are set to 1, and the values of the channel component of the pixels at all other positions are set to 0. The binarization yields a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image.

然后对第一二值化掩膜图像和第二二值化掩膜图像进行与运算,即分别将第一二值化掩膜图像和第二二值化掩膜图像中相同位置处的像素点进行与运算,得到相交区域掩膜图像。该相交区域掩膜图像中像素点的上述通道分量不为零的区域,即为第一目标区域图像和第二目标区域中重合的目标上妆区域。Then an AND operation is performed on the first and second binarized mask images; that is, the pixels at the same position in the two binarized mask images are ANDed, yielding an intersection mask image. The region of the intersection mask image in which the channel component of the pixels is non-zero is the target makeup region shared by the first target area image and the second target area image.

通过步骤202的操作获得初始帧图像对应的人脸区域图像及当前帧图像对应的人脸区域图像。对相交区域掩膜图像和初始帧图像对应的人脸区域图像进行与运算,得到初始帧图像对应的新的第一目标区域图像;对相交区域掩膜图像和当前帧图像对应的人脸区域图像进行与运算,得到当前帧图像对应的新的第二目标区域图像。The face area image corresponding to the initial frame image and the face area image corresponding to the current frame image are obtained through the operation of step 202. An AND operation on the intersection mask image and the face area image of the initial frame image yields a new first target area image for the initial frame image; an AND operation on the intersection mask image and the face area image of the current frame image yields a new second target area image for the current frame image.
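The binarize, intersect, and re-mask alignment described above can be sketched as follows; binarization here simply treats any non-zero channel value as belonging to the target makeup region.

```python
import numpy as np

def align_target_regions(region_a, region_b):
    """Binarise two single-channel target-region images, intersect their
    masks, and re-mask both so they cover exactly the same pixels."""
    mask_a = (region_a > 0).astype(np.uint8)
    mask_b = (region_b > 0).astype(np.uint8)
    shared = mask_a & mask_b                 # intersection mask (0/1)
    return region_a * shared, region_b * shared
```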

由于相交区域掩膜图像中包含初始帧图像和当前帧图像中重合的目标上妆区域,因此通过相交区域掩膜图像按照上述方式从初始帧图像和当前帧图像中分别抠取出新的第一目标区域图像和新的第二目标区域图像,使得新的第一目标区域图像和新的第二目标区域图像中目标上妆区域的位置是完全一致的,如此后续比对当前帧图像中目标上妆区域与初始帧图像中目标上妆区域的变化来确定化妆进度,确保了进行比对的区域是完全一致的,大大提高了化妆进度检测的准确性。Since the intersection mask image contains the target makeup region shared by the initial frame image and the current frame image, extracting the new first target area image and the new second target area image from the initial frame image and the current frame image respectively through the intersection mask image in the above manner makes the positions of the target makeup regions in the two new images exactly coincide. When the makeup progress is subsequently determined by comparing the change of the target makeup region in the current frame image against that in the initial frame image, the compared regions are thus guaranteed to be exactly the same, greatly improving the accuracy of makeup progress detection.

通过上述方式对初始帧图像和当前帧图像中的目标上妆区域进行对齐,得到新的第一目标区域图像和新的第二目标区域图像后,再次通过上述步骤203的操作来确定当前帧图像对应的当前化妆进度。After the target makeup regions in the initial frame image and the current frame image are aligned in the above manner and the new first and second target area images are obtained, the current makeup progress corresponding to the current frame image is determined again through the operation of step 203 above.

通过上述任一方式确定出当前化妆进度后,服务器发送该当前化妆进度给用户的终端。用户的终端接收到当前化妆进度后,显示该当前化妆进度。当前化妆进度可以为比值或百分数。终端可以通过进度条的形式来显示当前化妆进度。After the current makeup progress is determined in any of the above manners, the server sends the current makeup progress to the user's terminal. After receiving it, the user's terminal displays the current makeup progress. The current makeup progress may be a ratio or a percentage, and the terminal may display it in the form of a progress bar.

在用户化妆的过程中,通过本申请实施例提供的化妆进度检测方法,实时检测第一帧图像之后的每帧图像相对于第一帧图像的化妆进度,并将检测的化妆进度显示给用户,使用户能够直观地看到自己的化妆进度,提高化妆效率。During the user's makeup process, the makeup progress detection method provided by the embodiment of the present application detects, in real time, the makeup progress of each frame after the first frame relative to the first frame, and displays the detected makeup progress to the user, so that the user can intuitively see his or her own makeup progress, improving makeup efficiency.

为了便于理解本申请实施例提供的方法,下面结合附图进行说明。如图6所示,服务器可以主要包含人脸矫正和裁剪、高斯滤波、目标上妆区域抠取、HSV 颜色空间转换、目标上妆区域对齐、化妆进度计算等六个模块。人脸矫正和裁剪模块是对服务器获取到的初始帧图像和当前帧图像进行人脸区域的矫正和裁剪,得到初始帧图像对应的人脸区域图像和当前帧图像对应人脸区域图像。高斯滤波模块是对人脸矫正和裁剪模块获得的人脸区域图像进行高斯平滑,用于去除图像中的噪声。目标上妆区域抠取模块是根据初始帧图像对应的第一人脸关键点信息,对经过高斯滤波后的初始帧图像对应的人脸区域图像,进行目标上妆区域的抠取,得到对应的第一目标区域图像。同样地,目标上妆区域抠取模块还抠取当前帧图像对应的第二目标区域图像。HSV颜色空间转换模块是对目标上妆区域抠取模块获得的第一目标区域图像和第二目标区域图像的RGB颜色空间转换为HSV颜色空间。目标上妆区域对齐模块是将经过HSV颜色空间后的仅包含特定妆容对应的通道分量的单通道的第一目标区域图像和第二目标区域图像进行像素对齐操作。化妆进度计算模块是对经过目标上妆区域对齐模块对齐后的单通道的第一目标区域图像和第二目标区域图像进行作差计算,得到化妆进度百分比。In order to facilitate understanding of the method provided by the embodiment of the present application, a description is given below with reference to the accompanying drawings. As shown in FIG. 6, the server may mainly comprise six modules: face correction and cropping, Gaussian filtering, target makeup area extraction, HSV color space conversion, target makeup area alignment, and makeup progress calculation. The face correction and cropping module corrects and crops the face area of the initial frame image and the current frame image acquired by the server, obtaining the face area image corresponding to the initial frame image and the face area image corresponding to the current frame image. The Gaussian filtering module applies Gaussian smoothing to the face area images obtained by the face correction and cropping module to remove noise from the images. The target makeup area extraction module extracts the target makeup areas from the Gaussian-filtered face area image of the initial frame image according to the first face key point information corresponding to the initial frame image, obtaining the corresponding first target area image; likewise, it extracts the second target area image corresponding to the current frame image. The HSV color space conversion module converts the first and second target area images obtained by the target makeup area extraction module from the RGB color space to the HSV color space. The target makeup area alignment module performs a pixel alignment operation on the single-channel first and second target area images that, after the HSV conversion, contain only the channel component corresponding to the specific makeup. The makeup progress calculation module computes the difference between the aligned single-channel first and second target area images to obtain the makeup progress percentage.

在本申请实施例中,利用人脸关键点,对视频帧中用户的人脸区域进行矫正和裁剪,提高了识别视频帧中人脸区域的准确性。且基于人脸关键点,从人脸区域图像中确定出了目标上妆区域,并对初始帧图像中的目标上妆区域和当前帧图像中的目标上妆区域进行了像素对齐,提高了对每一帧图像中用户脸部的目标上妆区域识别的准确性。通过采用高斯平滑和HSV颜色空间转换,使得后续能够清晰获取初始帧图像与当前帧图像中目标上妆区域在特定妆容对应的通道分量上的差异,提高了化妆进度检测的精度。且本申请没有采用深度学习的方式,无需预先收集大量数据,本申请通过对用户化妆的实时画面的捕获,经过服务器端的计算,将检测结果返回给用户。相比深度学习的模型推理方案,本申请在算法处理环节耗费更少的计算成本,减少了服务器的处理压力。In the embodiment of the present application, the user's face area in the video frames is corrected and cropped using the face key points, which improves the accuracy of recognizing the face area in the video frames. Based on the face key points, the target makeup areas are determined from the face area image, and the target makeup areas in the initial frame image and the current frame image are pixel-aligned, which improves the accuracy of recognizing the target makeup areas on the user's face in each frame. By applying Gaussian smoothing and HSV color space conversion, the difference between the target makeup areas of the initial frame image and of the current frame image in the channel component corresponding to the specific makeup can subsequently be obtained clearly, which improves the precision of makeup progress detection. Moreover, the present application does not use deep learning and does not require collecting a large amount of data in advance; it captures real-time frames of the user applying makeup, performs the computation on the server side, and returns the detection result to the user. Compared with a deep learning model inference scheme, the present application incurs less computing cost in the algorithm processing stage and reduces the processing pressure on the server.

实施例二Embodiment two

本申请实施例提供一种化妆进度检测方法,该方法用于检测腮红妆容或者京剧等特殊领域的彩色妆容对应的化妆进度。参见图7,该实施例具体包括以下步骤:The embodiment of the present application provides a makeup progress detection method, which is used to detect the makeup progress corresponding to the blush makeup or the colored makeup in special fields such as Beijing opera. Referring to Figure 7, this embodiment specifically includes the following steps:

步骤301:获取至少一个目标上妆区域,以及获取用户当前进行特定妆容的实时化妆视频中的初始帧图像和当前帧图像。Step 301: Obtain at least one target makeup area, and obtain an initial frame image and a current frame image in a real-time makeup video in which the user is currently performing a specific makeup look.

本申请实施例的执行主体为服务器。用户的手机或电脑等终端上安装有与服务器提供的化妆进度检测服务相适配的客户端。当用户需要使用化妆进度检测服务时,用户打开终端上的该客户端,客户端显示预设类型的化妆对应的多个目标上妆区域,如显示腮红对应的多个目标上妆区域。显示的目标上妆区域可以按照脸部区域进行分类,如可以分为鼻头区域、两侧脸颊区域、下巴区域等。每个区域分类下可以包括多个形状和/或尺寸不同的目标上妆区域的轮廓。用户从显示的多个目标上妆区域中选择自己需要化妆的一个或多个目标上妆区域。客户端将用户选择的目标上妆区域发送给服务器。The execution subject of this embodiment of the application is a server. A client that is compatible with the makeup progress detection service provided by the server is installed on the terminal such as the user's mobile phone or computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays multiple target makeup areas corresponding to preset types of makeup, such as displaying multiple target makeup areas corresponding to blush. The displayed target makeup area can be classified according to the face area, for example, it can be divided into the nose area, the cheek area on both sides, the chin area, and the like. Each area category may include contours of multiple target makeup areas with different shapes and/or sizes. The user selects one or more target makeup application areas that he needs to apply makeup from among the displayed multiple target makeup application areas. The client sends the target makeup area selected by the user to the server.

作为一种示例,如图8所示的显示界面,该显示界面中包括鼻头区域、两侧脸颊区域和下巴区域各自对应的目标上妆区域的轮廓,用户可以在各区域对应的多个轮廓中选择自己需要上妆的脸部区域以及在选择的脸部区域上妆的轮廓,选择之后点击确认按键提交自己选择的目标上妆区域的轮廓。客户端检测到用户提交的一个或多个目标上妆区域,并发送给服务器。As an example, the display interface shown in FIG. 8 includes the contours of the target makeup areas corresponding to the nose area, the cheek areas on both sides, and the chin area. From the multiple contours of each area, the user can select the face areas to be made up and the contour of the makeup on each selected face area, and then click the confirm button to submit the contours of the selected target makeup areas. The client detects the one or more target makeup areas submitted by the user and sends them to the server.

作为另一种示例,本申请实施例还可以基于预设标准人脸图像制作多个美妆样式图,美妆样式图中包括一个或多个目标上妆区域的轮廓。预设标准人脸图像为脸部无遮挡、五官清晰且两眼连线与水平线平行的人脸图像。客户端显示的界面中可以同时显示多个美妆样式图,用户从显示的多个美妆样式图中选择一个美妆样式图。客户端将用户选择的美妆样式图发送给服务器。服务器接收客户端发送的美妆样式图,从该美妆样式图中获取该用户选择的一个或多个目标上妆区域。As another example, in the embodiment of the present application, multiple beauty makeup pattern maps may also be created based on a preset standard face image, each beauty makeup pattern map including the contours of one or more target makeup areas. The preset standard face image is a face image in which the face is unoccluded, the facial features are clear, and the line connecting the two eyes is parallel to the horizontal. The interface displayed by the client may display multiple beauty makeup pattern maps at the same time, and the user selects one of them. The client sends the beauty makeup pattern map selected by the user to the server. The server receives it and obtains the one or more target makeup areas selected by the user from that beauty makeup pattern map.

通过上述任一方式由用户自定义选择自己需要化妆的目标上妆区域,能够满足不同用户对腮红等预设类型的妆容的个性化上妆需求。By any of the above methods, the user can customize and select the target makeup area that he needs to make up, which can meet the personalized makeup requirements of different users for preset types of makeup such as blush.

在本申请的另一些实施例中,也可以不由用户选择目标上妆区域,而是在服务器中预先设置固定的目标上妆区域,设置好目标上妆区域的位置和形状。用户打开客户端后,客户端提示用户在服务器设置的这些目标上妆区域对应的部位进行上妆。服务器在接收到用户检测化妆进度的请求时,直接从本地配置文件中获取预先设置的一个或多个目标上妆区域。In other embodiments of the present application, instead of the user selecting the target makeup area, a fixed target makeup area can be preset in the server, and the position and shape of the target makeup area can be set. After the user opens the client, the client prompts the user to apply makeup on the parts corresponding to the target makeup areas set by the server. When the server receives the request from the user to detect the makeup progress, it directly obtains one or more preset target makeup areas from the local configuration file.

预先在服务器中配置好目标上妆区域,在用户需要检测化妆进度时,无需从用户的终端获取目标上妆区域,节省带宽,同时简化了用户操作,缩短了处理时间。The target makeup area is pre-configured in the server. When the user needs to check the progress of makeup, there is no need to obtain the target makeup area from the user's terminal, which saves bandwidth, simplifies user operations, and shortens the processing time.

上述客户端的显示界面中还设置有视频上传接口,当检测到用户点击该视频上传接口时,调用终端的摄像装置拍摄用户的化妆视频,在拍摄过程中用户在自己脸部的上述目标上妆区域中进行腮红等预设类型的化妆操作。用户的终端将拍摄的化妆视频以视频流的形式传输给服务器。服务器接收用户的终端传输的该化妆视频的每一帧图像。The display interface of the above client is further provided with a video upload interface. When it is detected that the user clicks the video upload interface, the camera device of the terminal is invoked to shoot the user's makeup video; during shooting, the user performs preset types of makeup operations such as blush in the above target makeup areas of his or her own face. The user's terminal transmits the captured makeup video to the server in the form of a video stream, and the server receives each frame image of the makeup video transmitted by the user's terminal.

在本申请的另一些实施例中,服务器获得用户的化妆视频的初始帧图像和当前帧图像后,还检测初始帧图像和当前帧图像中是否均仅包含同一个用户的人脸图像。首先检测初始帧图像和当前帧图像中是否均仅包含一个人脸图像,若初始帧图像和/或当前帧图像中包含多个人脸图像,或者,初始帧图像和/或当前帧图像中不包含人脸图像,则发送提示信息给用户的终端。用户的终端接收并显示该提示信息,以提示用户保持化妆视频中仅出现同一个用户的人脸。例如,提示信息可以为"请保持镜头内仅出现同一个人的人脸"。In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server also detects whether the initial frame image and the current frame image each contain only the face image of the same user. It first detects whether the initial frame image and the current frame image each contain exactly one face image; if the initial frame image and/or the current frame image contains multiple face images, or contains no face image, prompt information is sent to the user's terminal. The user's terminal receives and displays the prompt information to prompt the user to ensure that only the face of the same user appears in the makeup video. For example, the prompt information may be "please keep only the face of the same person in the frame".

若检测出初始帧图像和当前帧图像中均仅包含一个人脸图像,则进一步判断初始帧图像中的人脸图像与当前帧图像中的人脸图像是否属于同一个用户。具体地,可以通过人脸识别技术提取初始帧图像中人脸图像对应的人脸特征信息,以及提取当前帧图像中人脸图像对应的人脸特征信息,计算这两帧图像中提取的人脸特征信息的相似度,若计算的相似度大于或等于设定值,则确定初始帧图像和当前帧图像中的人脸属于同一用户,后续通过下述步骤302和303的操作来确定当前帧图像对应的当前化妆进度。若计算的相似度小于设定值,则确定初始帧图像和当前帧图像中的人脸属于不同用户,则发送提示信息给用户的终端。用户的终端接收并显示该提示信息,以提示用户保持化妆视频中仅出现同一个用户的人脸。If it is detected that both the initial frame image and the current frame image contain only one face image, it is further determined whether the face image in the initial frame image and the face image in the current frame image belong to the same user. Specifically, face recognition technology may be used to extract the face feature information corresponding to the face image in the initial frame image and the face feature information corresponding to the face image in the current frame image, and the similarity of the face feature information extracted from the two frames is calculated. If the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user, and the current makeup progress corresponding to the current frame image is subsequently determined through the operations of steps 302 and 303 below. If the calculated similarity is less than the set value, it is determined that the faces in the initial frame image and the current frame image belong to different users, and prompt information is sent to the user's terminal. The user's terminal receives and displays the prompt information to prompt the user to ensure that only the face of the same user appears in the makeup video.

在本申请实施例中,服务器将接收到的第一帧图像作为初始帧图像,以该初始帧图像作为参考来比对后续接收到的每一帧图像对应的特定妆容的当前化妆进度。由于对于后续每一帧图像的处理方式都相同,因此本申请实施例以当前时刻接收到的当前帧图像为例来阐述化妆进度检测的过程。In the embodiment of the present application, the server takes the received first frame image as the initial frame image, and uses the initial frame image as a reference to compare the current makeup progress of the specific makeup corresponding to each subsequently received frame image. Since the processing method for each subsequent image frame is the same, the embodiment of the present application uses the current frame image received at the current moment as an example to illustrate the process of makeup progress detection.

服务器通过本步骤获得至少一个目标上妆区域及用户化妆视频的初始帧图像和当前帧图像后,通过如下步骤302和303的操作来确定用户的当前化妆进度。After obtaining at least one target makeup area and the initial frame image and current frame image of the user's makeup video through this step, the server determines the user's current makeup progress through the following steps 302 and 303 .

步骤302:根据获得的目标上妆区域,生成美妆掩码图。Step 302: Generate a beauty mask map according to the obtained target makeup area.

具体地,根据每个目标上妆区域的位置和形状,在预设空白人脸图像中绘制每个目标上妆区域的轮廓。预设空白人脸图像可以是去除上述预设标准人脸图像中的像素而形成的。在预设空白人脸图像中绘制出每个目标上妆区域的轮廓之后,在绘制的每个轮廓内进行像素填充,同一个目标上妆区域的轮廓内填充像素值相同的像素点,不同的目标上妆区域之间填充的像素点的像素值互不相同。将填充操作后的图像作为美妆掩码图。Specifically, the outline of each target makeup area is drawn in a preset blank face image according to its position and shape. The preset blank face image may be formed by removing the pixels from the above preset standard face image. After the outline of each target makeup area is drawn in the preset blank face image, pixel filling is performed inside each drawn outline: pixels within the outline of the same target makeup area are filled with the same pixel value, and the pixel values filled into different target makeup areas differ from one another. The image after the filling operation is taken as the beauty mask map.
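The filling scheme above (same value within a region, different values across regions) can be sketched as follows; representing each region as a boolean mask and spacing the labels 40 apart are illustrative assumptions.

```python
import numpy as np

def beauty_mask(shape, regions):
    """Build a single-channel 'beauty mask map': each target makeup region
    (given as a boolean mask) is filled with its own distinct pixel value."""
    mask = np.zeros(shape, dtype=np.uint8)
    for label, region in enumerate(regions, start=1):
        mask[region] = label * 40  # distinct value per region (assumed spacing)
    return mask
```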

Step 303: Determine the current makeup progress corresponding to the current frame image according to the beauty mask image, the initial frame image and the current frame image.

First, with the beauty mask image as a reference, a first target area image with makeup applied is obtained from the initial frame image, and a second target area image with makeup applied is obtained from the current frame image. That is, using the beauty mask image as a mask, the images of the target makeup areas where the user needs to apply makeup are cut out from the initial frame image and the current frame image respectively. Then, the current makeup progress corresponding to the current frame image is determined according to the cut-out first target area image and second target area image.

The process of obtaining the first target area image from the initial frame image is similar to the operation of step 102 in the first embodiment above: first, the first face key points corresponding to the initial frame image are detected through step S1; then, through the operation of step S2, the face area image corresponding to the initial frame image is obtained according to the first face key points. For the specific operation of obtaining the face area image corresponding to the initial frame image, reference may be made to the relevant description in Embodiment 1, which is not repeated here. The process of obtaining the second target area image from the current frame image is the same as the process of obtaining the first target area image from the initial frame image.

Then, with the beauty mask image generated in step 302 as a reference, the first target area image with makeup applied is obtained from the face area image corresponding to the initial frame image. Makeup such as blush is applied to fixed areas of the face, for example to specific areas such as the tip of the nose, the cheeks on both sides and the chin. Therefore, these specific areas that need makeup can be cut out directly from the face area image, which avoids interference from invalid areas in the makeup progress detection and improves its accuracy.

Specifically, the server obtains the first target area image through the operations of the following steps S40-S42:

S40: Convert the beauty mask image and the face area image into binarized images respectively.

S41: Perform an AND operation on the binarized image corresponding to the beauty mask image and the binarized image corresponding to the face area image, to obtain a first mask image corresponding to the intersection area of the beauty mask image and the face area image.

An AND operation is performed on the pixel values of pixels with the same coordinates in the binarized image corresponding to the beauty mask image and in the binarized image corresponding to the face area image. Since only the pixels in the target makeup areas of the beauty mask image have non-zero pixel values while the pixels in all other areas are zero, the first mask image obtained by the AND operation is equivalent to cutting out each target makeup area from the face area image corresponding to the initial frame image.
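A minimal sketch of steps S40-S41, with toy arrays standing in for the beauty mask image and the face area image (real images and sizes would come from the pipeline above):

```python
import numpy as np

# Toy stand-ins: a beauty mask with one non-zero target area, and a
# grayscale face-region image (both 8x8 for illustration).
beauty_mask = np.zeros((8, 8), dtype=np.uint8)
beauty_mask[2:5, 2:5] = 1                      # target makeup area
face_region = np.full((8, 8), 200, dtype=np.uint8)

# S40: binarize both (non-zero -> 1).
mask_bin = (beauty_mask > 0).astype(np.uint8)
face_bin = (face_region > 0).astype(np.uint8)

# S41: AND pixels with equal coordinates; only the target area survives.
first_mask = mask_bin & face_bin

print(int(first_mask.sum()))  # 9  (the 3x3 target area)
```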

In other embodiments of the present application, since the beauty mask image is generated based on the preset standard face image, the target makeup areas in the beauty mask image may not coincide exactly with the areas where the user actually applies makeup in the initial frame image, which affects the accuracy of makeup progress detection. Therefore, before performing the AND operation on the binarized image corresponding to the beauty mask image and the binarized image corresponding to the face area image, the target makeup areas in the beauty mask image may first be aligned with the corresponding areas in the initial frame image.

Specifically, according to the standard face key points corresponding to the beauty mask image, one or more first anchor points located on the outline of each target makeup area in the beauty mask image are determined. The standard face key points corresponding to the beauty mask image are the standard face key points corresponding to the preset standard face image. For any target makeup area in the beauty mask image, it is first determined whether the outline of the target makeup area contains standard face key points; if so, the standard face key points on the outline are taken as the first anchor points corresponding to the target makeup area. If not, the standard face key points around the target makeup area are used to generate, by linear transformation, first anchor points located on the outline of the target makeup area. Specifically, the first anchor points may be obtained by translation operations such as moving the surrounding standard face key points up, down, left or right.

For example, for the nose tip area, the key point located at the tip of the nose may be moved to the left by a certain pixel distance to obtain a point on the left nostril wing, and moved to the right by a certain pixel distance to obtain a point on the right nostril wing. The key point at the tip of the nose, the point on the left nostril wing and the point on the right nostril wing serve as the three first anchor points corresponding to the nose tip area.

In the embodiment of the present application, the number of first anchor points corresponding to each target makeup area may be a preset number, such as 3 or 4.

After the first anchor points corresponding to each target makeup area in the beauty mask image are obtained in the above way, a second anchor point corresponding to each first anchor point is determined from the initial frame image according to the first face key points corresponding to the initial frame image. Since the standard face key points corresponding to the beauty mask image and the first face key points corresponding to the initial frame image are obtained through the same detection model, key points at different positions have their own numbers. Therefore, for a first anchor point that is itself a standard face key point, the first face key point whose number is the same as that of the corresponding standard face key point is determined from the first face key points corresponding to the initial frame image, and the determined first face key point is taken as the second anchor point corresponding to that first anchor point. For a first anchor point obtained by linear transformation of a standard face key point, the corresponding first face key point is determined from the first face key points corresponding to the initial frame image, and the point obtained by applying the same linear transformation to that first face key point is determined as the second anchor point corresponding to that first anchor point.

After the second anchor point corresponding to each first anchor point is determined in the above way, the beauty mask image is stretched so that each first anchor point is moved to the position of its corresponding second anchor point, i.e. after stretching, the position of each first anchor point in the beauty mask image is the same as the position of its corresponding second anchor point.
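For three anchor-point pairs, this stretching can be approximated by solving the affine transform that carries each first anchor point onto its second anchor point. This is an illustrative sketch only: the patent does not specify the warping method, the coordinates below are hypothetical, and a production implementation might use piecewise or thin-plate warping instead.

```python
import numpy as np

def affine_from_anchors(src_pts, dst_pts):
    """Solve the 2x3 affine transform mapping the three first anchor
    points (src) onto the three second anchor points (dst)."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((3, 1))])       # rows are [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # exact for 3 points
    return M.T                                   # 2x3 matrix

# Hypothetical anchors: mask-outline points vs. their positions
# in the initial frame image.
src = [(10, 10), (30, 10), (20, 25)]
dst = [(12, 14), (33, 13), (22, 30)]
M = affine_from_anchors(src, dst)

# Each first anchor point now lands on its second anchor point.
print(np.allclose(M @ np.array([10, 10, 1.0]), [12, 14]))  # True
```

Applying `M` to every pixel coordinate of the mask (e.g. via an image-warping routine) would produce the stretched beauty mask image.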

In the above way, the target makeup areas in the beauty mask image can be aligned with the areas where the user actually applies makeup in the initial frame image, ensuring that the first target area image with makeup applied can be accurately extracted from the initial frame image through the beauty mask image, thereby improving the accuracy of makeup progress detection.

After the beauty mask image is aligned with the initial frame image, the first mask image corresponding to the intersection area of the beauty mask image and the face area image of the initial frame image is obtained through the operation of step S41, and the first target area image corresponding to the initial frame image is then cut out through step S42.

S42: Perform an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain the first target area image corresponding to the initial frame image.

Since the first mask image is a binarized image, performing an AND operation on the first mask image and the face area image corresponding to the initial frame image cuts out the color images of the target makeup areas from that face area image, yielding the first target area image corresponding to the initial frame image.

In other embodiments of the present application, since the target makeup areas in the beauty mask image are not contiguous, the beauty mask image may also be split into multiple sub-mask images, each containing different target makeup areas. The split sub-mask images are then used to obtain the first target area image from the face area image corresponding to the initial frame image. Specifically, this may be implemented through the operations of the following steps S43-S47:

S43: Split the beauty mask image into multiple sub-mask images, each sub-mask image including at least one target makeup area.

The beauty mask image includes multiple mutually non-contiguous target makeup areas. These are split to obtain multiple sub-mask images, each of which may include only one target makeup area or more than one. The target makeup areas included in the respective sub-mask images differ from one another, and in each sub-mask image only the pixels in its target makeup areas have non-zero pixel values while the pixels in all other areas are zero.
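Because each target makeup area carries its own pixel value in the beauty mask image, the split of step S43 can be sketched as a per-value selection. The toy mask below is hypothetical:

```python
import numpy as np

# Beauty mask with two non-contiguous target areas labelled 1 and 2.
beauty_mask = np.zeros((6, 6), dtype=np.uint8)
beauty_mask[0:2, 0:2] = 1    # e.g. left cheek
beauty_mask[4:6, 4:6] = 2    # e.g. right cheek

# One sub-mask per non-zero label value; background stays zero.
sub_masks = []
for v in np.unique(beauty_mask):
    if v == 0:
        continue
    sub_masks.append(np.where(beauty_mask == v, beauty_mask, 0))

print(len(sub_masks))  # 2
```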

S44: Convert each sub-mask image and the face area image into binarized images respectively.

S45: Perform an AND operation on the binarized image corresponding to each sub-mask image and the binarized image corresponding to the face area image, to obtain the sub-mask result image corresponding to each sub-mask image.

For any sub-mask image, an AND operation is performed on the pixel values of pixels with the same coordinates in the binarized image of that sub-mask image and in the binarized image corresponding to the face area image. Since only the pixels in the target makeup area of the sub-mask image have non-zero pixel values while the pixels in all other areas are zero, the sub-mask result image obtained by the AND operation is equivalent to cutting out, from the face area image corresponding to the initial frame image, the target makeup area corresponding to that sub-mask image.

In other embodiments of the present application, since the beauty mask image is generated based on the preset standard face image and the sub-mask images are split from the beauty mask image, the target makeup areas in a sub-mask image may not coincide exactly with the areas where the user actually applies makeup in the initial frame image, which affects the accuracy of makeup progress detection. Therefore, before performing the AND operation on the binarized image corresponding to a sub-mask image and the binarized image corresponding to the face area image, the target makeup areas in the sub-mask image may first be aligned with the corresponding areas in the initial frame image.

Specifically, according to the standard face key points corresponding to the beauty mask image, one or more first anchor points located on the outline of the target makeup area in the sub-mask image are determined. According to the first face key points corresponding to the initial frame image, a second anchor point corresponding to each first anchor point is determined from the initial frame image. The sub-mask image is then stretched so that each first anchor point is moved to the position of its corresponding second anchor point, i.e. after stretching, the position of each first anchor point in the sub-mask image is the same as the position of its corresponding second anchor point.

In the above way, the target makeup areas in the sub-mask images can be aligned with the areas where the user actually applies makeup in the initial frame image, ensuring that the first target area image with makeup applied can be accurately extracted from the initial frame image through the sub-mask images, thereby improving the accuracy of makeup progress detection. By splitting the beauty mask image into multiple sub-mask images and aligning each sub-mask image with the initial frame image separately in the above way, the alignment is more accurate than directly aligning the whole beauty mask image with the initial frame image.

S46: Perform an AND operation on each sub-mask result image and the initial frame image respectively, to obtain multiple sub-target area images corresponding to the initial frame image.

S47: Merge the multiple sub-target area images into the first target area image corresponding to the initial frame image.

For the current frame image, the second target area image corresponding to the current frame image can be obtained in the same way. That is, the face area image corresponding to the current frame image is converted into a binarized image, and an AND operation is performed on the binarized image corresponding to the beauty mask image and the binarized image corresponding to the face area image of the current frame image, to obtain a second mask image corresponding to the intersection area of the beauty mask image and the face area image of the current frame image. An AND operation is then performed on the second mask image and the face area image corresponding to the current frame image to obtain the second target area image corresponding to the current frame image. Alternatively, an AND operation is performed on the binarized image corresponding to each of the above sub-mask images and the binarized image corresponding to the face area image of the current frame image, to obtain the sub-mask result images corresponding to the intersection areas of the sub-mask images and the face area image of the current frame image. An AND operation is performed on each sub-mask result image and the face area image corresponding to the current frame image, and the resulting sub-target area images are merged into the second target area image corresponding to the current frame image.

In other embodiments of the present application, it is considered that in an actual makeup scene the edges of a target makeup area may not have a clear outline. For example, in a blush scene the color becomes lighter towards the edges, so that the blush looks natural rather than abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion processing is further performed on the target makeup areas in both images to blur their boundaries, making the target makeup areas in the first target area image and the second target area image closer to the real makeup extent and thereby improving the accuracy of makeup progress detection.
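Boundary erosion can be sketched as a single 3x3 binary erosion step, written here in plain NumPy for self-containedness (a real implementation would more likely call a library routine such as OpenCV's erode):

```python
import numpy as np

def erode_once(mask):
    """One step of 3x3 binary erosion: a pixel stays 1 only if it and
    all 8 of its neighbours are 1, which shrinks the area boundary."""
    padded = np.pad(mask, 1)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out &= padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return out

area = np.zeros((7, 7), dtype=np.uint8)
area[1:6, 1:6] = 1           # 5x5 target makeup area
print(int(erode_once(area).sum()))  # 9  (5x5 shrinks to 3x3)
```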

The color space of both the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image obtained in the above way is the RGB color space. In the embodiment of the present application, a large number of experiments were carried out in advance to determine the influence of preset types of makeup such as blush on the channel components of the color space, and it was found that the influence on the individual color channels of the RGB color space differs little. The HSV color space, by contrast, is composed of three components, Hue, Saturation and Value (brightness); when one component changes, the other two do not change significantly, so unlike the RGB color space, the HSV color space allows a single channel component to be separated out. Experiments also determined which of the brightness, hue and saturation channel components a preset type of makeup affects most, and the most affected channel component is configured in the server as the preset single-channel component corresponding to that preset type of makeup. For preset types of makeup such as blush, the corresponding preset single-channel component may be the brightness component.

After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in any of the above ways, both images are converted from the RGB color space to the HSV color space. The preset single-channel component is separated from the HSV color space of the converted first target area image to obtain a first target area image containing only the preset single-channel component, and likewise the preset single-channel component is separated from the HSV color space of the converted second target area image to obtain a second target area image containing only the preset single-channel component.
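The channel separation can be sketched with the standard-library colorsys conversion, keeping only the Value (brightness) plane of a toy RGB patch. The pixel values are hypothetical:

```python
import colorsys
import numpy as np

# Toy 2x2 RGB "target area image" (values in 0..1 as colorsys expects).
rgb = np.array([[[0.9, 0.2, 0.3], [0.8, 0.8, 0.8]],
                [[0.1, 0.1, 0.1], [0.5, 0.9, 0.4]]])

# Convert each pixel to HSV and keep only the Value (brightness) plane.
value = np.apply_along_axis(lambda p: colorsys.rgb_to_hsv(*p)[2], 2, rgb)

print(value.tolist())  # [[0.9, 0.8], [0.1, 0.9]]
```

In production the conversion would run on 8-bit images, e.g. via an image-processing library's RGB-to-HSV routine followed by a channel split.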

Then, the current makeup progress corresponding to the current frame image is determined according to the converted first target area image and second target area image.

Specifically, the absolute difference of the above channel component is calculated for pixels at the same position in the first target area image and the second target area image. For example, if the preset type of makeup is blush, the absolute difference of the brightness component between pixels with the same coordinates in the converted first target area image and second target area image is calculated.

According to the absolute difference corresponding to each pixel, the area of the region where the specific makeup has been completed is determined. Specifically, the number of pixels whose absolute difference satisfies a preset makeup completion condition is counted. The preset makeup completion condition is that the absolute difference corresponding to the pixel is greater than a first preset threshold, which may be 7 or 8, for example.

The counted number of pixels satisfying the preset makeup completion condition is determined as the area of the region where the specific makeup has been completed. The total number of pixels in all target makeup areas in the first target area image or the second target area image is counted, and this total is determined as the total area corresponding to all target makeup areas. The ratio between the area of the region where the specific makeup has been completed and the total area corresponding to the target makeup areas is then calculated, and this ratio is determined as the user's current makeup progress for the specific makeup. That is, the ratio between the counted number of pixels and the total number of pixels in all target makeup areas in the first target area image yields the current makeup progress corresponding to the current frame image.
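The progress computation described above reduces to a thresholded pixel count, sketched below with toy single-channel frames. The threshold of 7 follows the example value given above; the array contents are hypothetical:

```python
import numpy as np

def makeup_progress(v_initial, v_current, region_mask, threshold=7):
    """Ratio of target-area pixels whose brightness changed by more
    than `threshold` between the initial and current frames."""
    diff = np.abs(v_current.astype(np.int32) - v_initial.astype(np.int32))
    done = np.count_nonzero((diff > threshold) & (region_mask > 0))
    total = np.count_nonzero(region_mask)
    return done / total

mask = np.ones((4, 4), dtype=np.uint8)     # whole toy patch is target area
v0 = np.full((4, 4), 100, dtype=np.uint8)  # initial brightness
v1 = v0.copy()
v1[:2] = 120                               # top half brightened by 20 (> 7)

print(makeup_progress(v0, v1, mask))  # 0.5
```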

In other embodiments of the present application, in order to further improve the accuracy of makeup progress detection, the target makeup areas in the first target area image and the second target area image are further aligned. Specifically, binarization processing is performed on the first target area image and the second target area image containing only the above preset single-channel component: the values of the preset single-channel component of the pixels in the target makeup areas of the two images are set to 1, and the values of the preset single-channel component of the pixels at all other positions are set to 0. The binarization yields a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image.

An AND operation is then performed on the first binarized mask image and the second binarized mask image, i.e. pixels at the same position in the two binarized mask images are ANDed, yielding a second mask image corresponding to the intersection area of the first target area image and the second target area image. The region of this second mask image where the preset single-channel component of the pixels is non-zero is the target makeup area common to the first target area image and the second target area image.

The face area image corresponding to the initial frame image and the face area image corresponding to the current frame image are obtained through the operation of step 303. An AND operation is performed on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image, and an AND operation is performed on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.

Since the second mask image contains the target makeup areas common to the initial frame image and the current frame image, extracting a new first target area image and a new second target area image from the initial frame image and the current frame image respectively through the second mask image in the above way makes the positions of the target makeup areas in the two new images exactly consistent. When the makeup progress is subsequently determined by comparing the changes between the target makeup areas in the current frame image and those in the initial frame image, this ensures that the compared areas are exactly the same, greatly improving the accuracy of makeup progress detection.

After the target makeup areas in the initial frame image and the current frame image are aligned in the above way and the new first target area image and new second target area image are obtained, the current makeup progress corresponding to the current frame image is determined again through the operation of step 303 above.

After the current makeup progress is determined in any of the above ways, the server sends the current makeup progress to the user's terminal. Upon receiving it, the user's terminal displays the current makeup progress, which may be a ratio or a percentage, for example in the form of a progress bar.

While the user is applying makeup, the makeup progress detection method provided by the embodiment of the present application detects in real time the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, enabling the user to see their makeup progress intuitively and improving makeup efficiency.

To facilitate understanding of the method provided by the embodiment of the present application, it is described below with reference to the accompanying drawings. As shown in FIG. 9, according to the initial frame image and its corresponding first face key points, and the current frame image and its corresponding second face key points, the faces in the initial frame image and the current frame image are respectively aligned and cropped, and the two cropped face area images are then smoothed and denoised by the Laplacian algorithm. The beauty mask image is then aligned with each of the two face area images, and the first target area image and the second target area image are cut out from the two face area images respectively according to the beauty mask image. Boundary erosion processing is performed on the first target area image and the second target area image, and the two images are then converted into images containing only the preset single-channel component in the HSV color space. The first target area image and the second target area image are aligned once more, after which the current makeup progress is calculated from the two.

In the embodiment of the present application, the user's face area in the video frame is corrected and cropped using face key points, improving the accuracy of identifying the face area. The target makeup areas are determined from the face area image based on the face key points, and the target makeup areas in the initial frame image and the current frame image are pixel-aligned, improving the accuracy of target makeup area identification. Aligning the target makeup areas in the initial frame image and the current frame image reduces errors introduced by positional differences between them. When cutting out the target makeup areas, non-contiguous target makeup areas can be computed separately, increasing the accuracy of obtaining the target makeup areas. The target makeup areas in the beauty mask image are also aligned with those in the face area image, ensuring that the cut-out target makeup areas all lie within the face area image and do not exceed the face boundary. Moreover, the present application does not adopt deep learning and does not need to collect a large amount of data in advance: it captures real-time images of the user applying makeup, performs the computation on the server side, and returns the detection result to the user. Compared with a deep-learning model inference scheme, the present application incurs less computational cost in the algorithm processing stage and reduces the processing pressure on the server.

Embodiment Three

An embodiment of the present application provides a makeup progress detection method, which is used for detecting the makeup progress corresponding to eyeliner makeup. Referring to Figure 10, this embodiment specifically includes the following steps:

Step 401: Obtain an initial frame image and a current frame image from a real-time makeup video in which the user is currently applying a specific makeup look, and obtain the beauty mask images corresponding to the initial frame image and the current frame image.

The execution subject of this embodiment of the present application is a server. A client adapted to the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays multiple eyeliner style maps. The eyeliner style maps are made based on a preset standard face image, which is a face image with an unoccluded face, clear facial features, and the line connecting the two eyes parallel to the horizontal. Each eyeliner style map corresponds to the eyeliner effect of a different eyeliner shape; for example, one eyeliner style map may correspond to the eyeliner effect for round eyes, another to the eyeliner effect for downturned eyes, another to the eyeliner effect for upturned eyes, and so on. Considering that the user keeps the eyes open most of the time while applying eyeliner, the eye state of the face in the preset standard face image is the open-eye state, and the eyeliner style maps are made from the preset standard face image in the open-eye state.

The interface displayed by the client can show multiple eyeliner style maps at the same time, and the user selects one eyeliner style map from those displayed. The client sends the eyeliner style map selected by the user to the server, and the server receives it.

The display interface of the client is also provided with a video upload interface. When it is detected that the user has tapped the video upload interface, the camera of the terminal is invoked to capture a makeup video of the user; during the capture, the user applies eyeliner makeup to his or her own face. The user's terminal transmits the captured makeup video to the server in the form of a video stream, and the server receives each frame image of the makeup video transmitted by the user's terminal.

In the embodiment of the present application, the server takes the first frame image received as the initial frame image, and uses the initial frame image as a reference against which to determine the current makeup progress of the specific makeup look for each subsequently received frame image. Since each subsequent frame image is processed in the same way, the embodiment of the present application takes the current frame image received at the current moment as an example to describe the makeup progress detection process.

In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server further detects whether the initial frame image and the current frame image each contain only the face image of the same user. It first detects whether the initial frame image and the current frame image each contain exactly one face image. If the initial frame image and/or the current frame image contains multiple face images, or the initial frame image and/or the current frame image contains no face image, a prompt message is sent to the user's terminal. The user's terminal receives and displays the prompt message to remind the user to keep only the face of the same user in the makeup video. For example, the prompt message may be "Please keep only the face of the same person in the frame".

If it is detected that both the initial frame image and the current frame image contain exactly one face image, it is further determined whether the face image in the initial frame image and the face image in the current frame image belong to the same user. Specifically, face recognition technology may be used to extract the facial feature information corresponding to the face image in the initial frame image and the facial feature information corresponding to the face image in the current frame image, and the similarity between the facial feature information extracted from the two frame images is calculated. If the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user. If the calculated similarity is less than the set value, it is determined that the faces in the initial frame image and the current frame image belong to different users, and a prompt message is sent to the user's terminal. The user's terminal receives and displays the prompt message to remind the user to keep only the face of the same user in the makeup video.
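The same-user check described above can be sketched as a cosine-similarity comparison of facial feature vectors. The sketch below (Python/NumPy) assumes the feature vectors have already been produced by some face recognition model, which the method does not specify; the vector values and the 0.8 threshold are illustrative assumptions, not values given in the specification.

```python
import numpy as np

def same_user(feat_a, feat_b, threshold=0.8):
    """Compare two facial feature vectors by cosine similarity.

    The feature vectors are assumed to come from an unspecified face
    recognition model; `threshold` plays the role of the "set value"
    mentioned above and is an illustrative choice.
    """
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold, sim

# Two nearly identical toy feature vectors: judged as the same user.
ok, sim = same_user([1.0, 0.0, 1.0], [1.0, 0.1, 0.9])
print(ok)  # True
```

If the check returns `False`, the server would send the prompt message to the terminal instead of proceeding with progress detection.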

After the server obtains the eyeliner style map selected by the user, together with the initial frame image and the current frame image of the user's makeup process in the manner described above, the server may directly determine the eyeliner style map selected by the user as the beauty mask image corresponding to both the initial frame image and the current frame image.

Alternatively, in other embodiments, when the eyeliner style maps are made, for the eyeliner effect of each eyeliner shape, both an open-eye style map corresponding to the open-eye state and a closed-eye style map corresponding to the closed-eye state are made, and the open-eye style map and closed-eye style map corresponding to each eyeliner style are pre-configured in the server. The eyeliner style maps displayed on the client interface may all be open-eye style maps, or may all be closed-eye style maps. After the user selects the desired eyeliner style map from the multiple displayed eyeliner style maps, the client sends the selected eyeliner style map to the server. According to the eyeliner style map selected by the user and the user's eye state in the initial frame image and the current frame image, the server respectively determines the beauty mask image corresponding to the initial frame image and the beauty mask image corresponding to the current frame image.

For the initial frame image, the texture features of the eye region of the face in the initial frame image are first analyzed through image processing to determine whether the user's eye state in the initial frame image is the open-eye state. If so, according to the eyeliner style map selected by the user, the open-eye style map corresponding to that eyeliner style map is obtained from the pre-configured groups of open-eye style maps and closed-eye style maps, and that open-eye style map is determined as the beauty mask image corresponding to the initial frame image. If it is determined that the user's eye state in the initial frame image is the closed-eye state, then according to the eyeliner style map selected by the user, the closed-eye style map corresponding to that eyeliner style map is obtained from the pre-configured groups of open-eye style maps and closed-eye style maps, and that closed-eye style map is determined as the beauty mask image corresponding to the initial frame image.

For the current frame image, the operation is the same as that for the initial frame image, and the beauty mask image corresponding to the current frame image is determined in the manner described above.

The beauty mask image corresponding to the initial frame image and the beauty mask image corresponding to the current frame image are thus determined separately according to the user's eye state in each image. This makes the eye state of the initial frame image consistent with that of its corresponding beauty mask image, and the eye state of the current frame image consistent with that of its corresponding beauty mask image. In turn, the subsequent extraction of the eyeliner makeup region based on the beauty mask image is more accurate, the error caused by inconsistent eye states between the initial frame image and the current frame image is eliminated, and the accuracy of eyeliner makeup progress detection is improved.

After the server obtains the initial frame image of the user's makeup and its corresponding beauty mask image, as well as the current frame image of the user's makeup and its corresponding beauty mask image through this step, the user's current makeup progress is determined through the operations of steps 402 and 403 below.

Step 402: According to the initial frame image, simulate and generate a result image showing the completed eyeliner makeup.

The effect of completed eyeliner makeup is rendered onto the initial frame image using 3D rendering technology to obtain the result image.
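The 3D rendering step itself is not detailed in the specification. As a crude 2D stand-in for intuition only, the completed-eyeliner result image can be imagined as the initial frame with a dark eyeliner color alpha-blended into the mask region; the eyeliner color and blend factor below are illustrative assumptions, not part of the patented method.

```python
import numpy as np

def render_result(initial, mask, liner_rgb=(20, 15, 15), alpha=0.85):
    """2D stand-in for the 3D rendering step: blend an eyeliner color
    into the mask region of the initial frame. Color and alpha are
    illustrative assumptions."""
    out = initial.astype(float).copy()
    color = np.array(liner_rgb, dtype=float)
    m = (mask > 0)[..., None]          # broadcast mask over channels
    out = np.where(m, (1 - alpha) * out + alpha * color, out)
    return out.astype(np.uint8)

# Toy 2x2 frame; only the top-left pixel is inside the eyeliner mask.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)
res = render_result(frame, mask)
print(res[0, 0].tolist(), res[0, 1].tolist())
```

Only the masked pixel is darkened toward the eyeliner color; pixels outside the mask are unchanged.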

Step 403: Determine the current makeup progress corresponding to the current frame image according to the beauty mask image, the result image, the initial frame image, and the current frame image.

The eyeliner makeup region usually includes the upper eyelid, the lower eyelid, the outer corner of the eye, and similar areas. These specific regions to which makeup is to be applied can therefore be extracted directly from the face region image, which avoids interference from invalid regions and improves the accuracy of eyeliner makeup progress detection.

First, the first target region image of the eyeliner makeup is obtained from the initial frame image according to the beauty mask image corresponding to the initial frame image. The second target region image of the eyeliner makeup is obtained from the current frame image according to the beauty mask image corresponding to the current frame image. Since the result image is generated on the basis of the initial frame image, the beauty mask image corresponding to the initial frame image can be used to extract the eyeliner makeup region in the result image; that is, the third target region image of the eyeliner makeup is obtained from the result image according to the beauty mask image corresponding to the initial frame image. Then, the current makeup progress corresponding to the current frame image is determined according to the extracted first target region image, second target region image, and third target region image.

The process of obtaining the first target region image from the initial frame image is similar to the operation of step 102 in Embodiment One: the first face key points corresponding to the initial frame image are first detected through step S1, and the face region image corresponding to the initial frame image is then obtained from the first face key points through the operation of step S2. For the specific operation of obtaining the face region image corresponding to the initial frame image, reference may be made to the relevant description in Embodiment One, which is not repeated here. Then, according to the beauty mask image corresponding to the initial frame image, the first target region image of the eyeliner makeup is obtained from the face region image corresponding to the initial frame image; specifically, the operations of steps S30-S32 in Embodiment Two may be used to obtain the first target region image, or the operations of steps S33-S37 in Embodiment Two may be used instead, which are likewise not repeated here.

For the result image and the current frame image, the second target region image corresponding to the current frame image and the third target region image corresponding to the result image may be obtained in the same manner as for the initial frame image.

The first target region image corresponding to the initial frame image, the second target region image corresponding to the current frame image, and the third target region image corresponding to the result image obtained in the above manner are all in the RGB color space. In the embodiments of the present application, the influence of eyeliner makeup on each channel component of the color space was determined in advance through extensive experiments, and it was found that the influence on the individual color channels of the RGB color space differs little. The HLS color space, by contrast, is composed of three components: hue (H), lightness (L), and saturation (S). Experiments showed that eyeliner makeup causes a pronounced change in the saturation component of the HLS color space.

After the first target region image corresponding to the initial frame image, the second target region image corresponding to the current frame image, and the third target region image corresponding to the result image are obtained in any of the above manners, the first, second, and third target region images are each converted from the RGB color space to the HLS color space. The saturation channel is separated from the HLS color space of the converted first target region image to obtain a first target region image containing only the saturation channel; the saturation channel is separated from the HLS color space of the converted second target region image to obtain a second target region image containing only the saturation channel; and the saturation channel is separated from the HLS color space of the converted third target region image to obtain a third target region image containing only the saturation channel.
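The conversion to HLS and separation of the saturation channel can be sketched as follows. This minimal version uses the standard-library `colorsys` conversion pixel by pixel; a real implementation would more likely use a vectorized conversion such as OpenCV's `cv2.cvtColor(img, cv2.COLOR_RGB2HLS)`.

```python
import colorsys
import numpy as np

def saturation_channel(rgb_img):
    """Convert an RGB uint8 image to HLS and keep only the saturation
    component, rescaled to 0-255."""
    h, w, _ = rgb_img.shape
    sat = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            r, g, b = (rgb_img[y, x] / 255.0).tolist()
            _, _, s = colorsys.rgb_to_hls(r, g, b)  # returns (H, L, S)
            sat[y, x] = int(round(s * 255))
    return sat

# A fully saturated red pixel and an unsaturated gray pixel.
img = np.array([[[255, 0, 0], [128, 128, 128]]], dtype=np.uint8)
print(saturation_channel(img).tolist())  # [[255, 0]]
```

The gray pixel maps to zero saturation, which is why newly drawn dark eyeliner strokes stand out clearly in this channel.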

Then, the current makeup progress corresponding to the current frame image is determined according to the converted first target region image, second target region image, and third target region image.

Specifically, the first average pixel value corresponding to the converted first target region image, the second average pixel value corresponding to the converted second target region image, and the third average pixel value corresponding to the converted third target region image are respectively calculated, where the average pixel value is the average of the saturation components of all pixels in the eyeliner makeup region of the image.

The first difference between the second average pixel value and the first average pixel value is calculated. The first difference represents the saturation change of the eyeliner makeup region between the initial frame image and the current frame image, which is produced by the eyeliner makeup operations performed so far as reflected in the current frame image.

The second difference between the third average pixel value and the first average pixel value is calculated. The second difference represents the saturation change of the eyeliner makeup region between the initial frame image and the result image, which is produced by the completed eyeliner makeup.

The ratio of the first difference to the second difference is calculated to obtain the current makeup progress corresponding to the current frame image; that is, the ratio between the saturation change caused by the eyeliner makeup applied so far and the saturation change caused by the completed eyeliner makeup is taken as the current makeup progress.
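The three paragraphs above reduce to a ratio of mean-saturation differences. A minimal sketch, assuming the three saturation-channel target region images and the makeup-region mask are already available:

```python
import numpy as np

def eyeliner_progress(sat_initial, sat_current, sat_result, mask):
    """progress = (mean_S(current) - mean_S(initial))
                / (mean_S(result)  - mean_S(initial)),
    with means taken over the eyeliner makeup region only."""
    m = mask > 0
    s1 = sat_initial[m].mean()   # first average pixel value
    s2 = sat_current[m].mean()   # second average pixel value
    s3 = sat_result[m].mean()    # third average pixel value
    return (s2 - s1) / (s3 - s1)

mask = np.ones((2, 2), dtype=np.uint8)
s_init = np.full((2, 2), 40.0)
s_cur = np.full((2, 2), 70.0)    # halfway between initial and result
s_res = np.full((2, 2), 100.0)
print(eyeliner_progress(s_init, s_cur, s_res, mask))  # 0.5
```

A production version would also need to guard against a zero second difference, e.g. when the rendered result is indistinguishable from the initial frame.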

In other embodiments of the present application, in order to further improve the accuracy of eyeliner makeup progress detection, alignment is also performed on the first target region image corresponding to the initial frame image and the second target region image corresponding to the current frame image, and on the first target region image corresponding to the initial frame image and the third target region image corresponding to the result image.

Since the operation of aligning the first target region image with the second target region image is the same as the operation of aligning the first target region image with the third target region image, the embodiment of the present application describes in detail only the alignment of the first target region image with the second target region image.

Specifically, binarization is performed on the first target region image and the second target region image that contain only the saturation channel; that is, the saturation component values of the pixels in the eyeliner makeup regions of the first target region image and the second target region image are all set to 1, and the saturation component values of the pixels at the remaining positions are all set to 0. The binarization yields a first binarized mask image corresponding to the first target region image and a second binarized mask image corresponding to the second target region image.

An AND operation is then performed on the first binarized mask image and the second binarized mask image; that is, the pixels at the same positions in the two binarized mask images are ANDed, yielding a second mask image corresponding to the intersection region of the first target region image and the second target region image. The region of the second mask image in which the saturation components of the pixels are non-zero is the eyeliner makeup region where the first target region image and the second target region image overlap.

The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operation of step 403. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image, and an AND operation is performed on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
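The binarization and AND steps above can be sketched as follows. For brevity, the sketch binarizes on non-zero saturation values rather than explicitly rewriting pixels to 1 and 0; the result is the second mask image marking where the two eyeliner makeup regions overlap.

```python
import numpy as np

def intersect_regions(sat_a, sat_b):
    """Binarize two saturation-channel target region images (non-zero
    pixel -> 1) and AND them to obtain the second mask image, i.e. the
    overlap of the two eyeliner makeup regions."""
    bin_a = (sat_a > 0).astype(np.uint8)   # first binarized mask image
    bin_b = (sat_b > 0).astype(np.uint8)   # second binarized mask image
    return bin_a & bin_b                   # second mask image

# Toy saturation channels: only the middle column overlaps.
a = np.array([[5, 5, 0], [0, 5, 0]], dtype=np.uint8)
b = np.array([[0, 7, 7], [0, 7, 0]], dtype=np.uint8)
print(intersect_regions(a, b).tolist())  # [[0, 1, 0], [0, 1, 0]]
```

Applying this second mask image to each face region image (again with an AND) then yields the new, position-aligned target region images described above.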

Since the second mask image contains the eyeliner makeup region where the initial frame image and the current frame image overlap, extracting the new first target region image and the new second target region image from the initial frame image and the current frame image respectively through the second mask image in the above manner makes the positions of the eyeliner makeup regions in the two new target region images coincide exactly; that is, the eyeliner makeup regions in the initial frame image and the current frame image are aligned, which can improve the accuracy of eyeliner makeup progress detection.

The first target region image and the third target region image are likewise aligned in the above manner, so that the positions of the eyeliner makeup regions in the new first target region image and the new third target region image coincide exactly; that is, the eyeliner makeup regions in the initial frame image and the result image are aligned, which can improve the accuracy of eyeliner makeup progress detection.

After the new first target region image, the new second target region image, and the new third target region image are obtained in the above manner, the current makeup progress corresponding to the current frame image is determined again through the operation of step 403 above.

After the current makeup progress is determined in any of the above manners, the server sends the current makeup progress to the user's terminal. Upon receiving it, the user's terminal displays the current makeup progress, which may be a ratio or a percentage; the terminal may display it in the form of a progress bar.

During the user's makeup process, the makeup progress detection method provided by the embodiments of the present application detects in real time the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, enabling the user to see the progress of his or her eyeliner makeup intuitively and improving makeup efficiency.

To facilitate understanding of the method provided by the embodiments of the present application, a description is given below with reference to the accompanying drawings. As shown in Figure 11, based on the initial frame image and its corresponding first face key points, the current frame image and its corresponding second face key points, and the result image and its corresponding third face key points, the faces in the initial frame image, the result image, and the current frame image are respectively rectified and cropped, and the three cropped face region images are then smoothed and denoised using the Laplacian algorithm. The beauty mask image is then aligned with each of the three face region images, and the first target region image corresponding to the initial frame image, the second target region image corresponding to the current frame image, and the third target region image corresponding to the result image are extracted according to the beauty mask image. The first, second, and third target region images are then converted into images containing only the saturation channel in the HLS color space. The first average pixel value, the second average pixel value, and the third average pixel value corresponding to the converted first, second, and third target region images are calculated; the first difference between the second average pixel value and the first average pixel value and the second difference between the third average pixel value and the first average pixel value are calculated; and the ratio of the first difference to the second difference is calculated to obtain the current makeup progress.

In the embodiment of the present application, the current frame image and the initial frame image of the user's makeup process are obtained, and a result image of completed eyeliner makeup is rendered on the basis of the initial frame image. The saturation change value of the eyeliner makeup region from the initial frame image to the current frame image is determined, the saturation change value of the eyeliner makeup region from the initial frame image to the result image is determined, and the ratio between the saturation change value corresponding to the current frame image and the saturation change value corresponding to the result image is calculated, yielding the current progress of the eyeliner makeup. Without a deep learning model, the eyeliner makeup progress can be detected accurately through image processing alone, with little computation and at low cost; this reduces the processing load on the server, improves the efficiency of eyeliner makeup progress detection, and can meet its real-time requirements.

Further, the user's face region in each video frame is rectified and cropped using face key points, which improves the accuracy of recognizing the face region. The eyeliner makeup regions in the initial frame image and the current frame image, and in the initial frame image and the result image, are aligned, reducing the error introduced by positional differences of the eyeliner makeup region. When extracting the eyeliner makeup region, discontinuous eyeliner makeup regions can be computed separately, increasing the accuracy of obtaining the eyeliner makeup region. The eyeliner makeup region in the beauty mask image is also aligned with the eyeliner makeup region in the face region image, ensuring the precision of the extracted eyeliner makeup region.

Embodiment Four

1. A makeup progress detection method, comprising:

obtaining an eye shadow mask image, and an initial frame image and a current frame image from a real-time makeup video in which the user is currently applying a specific makeup look;

splitting, for each target makeup region of the eye shadow makeup, the beauty mask image corresponding to that target makeup region from the eye shadow mask image; and

determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask image corresponding to each target makeup region.

2. The method according to 1, wherein determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask image corresponding to each target makeup region comprises:

obtaining, from the initial frame image, a first target region image corresponding to each target makeup region, with the beauty mask image corresponding to each target makeup region as a reference;

obtaining, from the current frame image, a second target region image corresponding to each target makeup region, with the beauty mask image corresponding to each target makeup region as a reference; and

determining the current makeup progress corresponding to the current frame image according to the first target region image and the second target region image corresponding to each target makeup region.

3. The method according to 2, wherein determining the current makeup progress corresponding to the current frame image according to the first target region image and the second target region image corresponding to each target makeup region comprises:

分别将每个所述目标上妆区域对应的第一目标区域图像和第二目标区域图像转换为HLS颜色空间下包含预设单通道分量的图像;Respectively converting the first target area image and the second target area image corresponding to each target makeup area into images containing preset single-channel components under the HLS color space;

根据转换后的每个所述目标上妆区域对应的第一目标区域图像和第二目标区域图像,确定所述当前帧图像对应的当前化妆进度。According to the converted first target area image and second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image is determined.

4、根据3,所述根据转换后的每个所述目标上妆区域对应的第一目标区域图像和第二目标区域图像,确定所述当前帧图像对应的当前化妆进度,包括:4. According to 3, according to the converted first target area image and second target area image corresponding to each target makeup area, determining the current makeup progress corresponding to the current frame image includes:

分别计算转换后同一目标上妆区域对应的第一目标区域图像和第二目标区域图像中位置相同的像素点对应的所述预设单通道分量的差值绝对值;Respectively calculate the difference absolute value of the preset single-channel component corresponding to the pixels in the same position in the first target area image corresponding to the same target makeup area after conversion and in the second target area image;

统计每个目标上妆区域对应的差值绝对值满足预设化妆完成条件的像素点数目;Count the number of pixels whose absolute value of the difference corresponding to each target makeup area satisfies the preset makeup completion condition;

分别计算每个目标上妆区域对应的所述像素点数目与对应目标上妆区域中的像素点总数目之间的比值,得到每个目标上妆区域对应的化妆进度;Calculating the ratio between the number of pixels corresponding to each target makeup area and the total number of pixels in the corresponding target makeup area to obtain the makeup progress corresponding to each target makeup area;

根据每个目标上妆区域对应的化妆进度及每个目标上妆区域对应的预设权重,计算所述当前帧图像对应的当前化妆进度。According to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area, the current makeup progress corresponding to the current frame image is calculated.
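Taken together, 3 and 4 amount to a per-pixel single-channel comparison followed by a weighted average. A minimal NumPy sketch; the `diff_threshold` value and the region weights are hypothetical, since the claims only speak of a "preset makeup completion condition" and "preset weights":

```python
import numpy as np

def region_progress(initial_l, current_l, diff_threshold=20):
    # Absolute per-pixel difference of the preset single-channel
    # component (e.g. L in HLS) between initial and current frames.
    diff = np.abs(initial_l.astype(np.int16) - current_l.astype(np.int16))
    # Fraction of pixels whose change satisfies the completion condition.
    return np.count_nonzero(diff >= diff_threshold) / diff.size

def overall_progress(region_pairs, weights):
    # Weighted sum of per-region progress values over all target areas.
    return sum(w * region_progress(init, cur)
               for (init, cur), w in zip(region_pairs, weights))
```

Casting to `int16` before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce.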

5. According to 2, taking the beauty mask map corresponding to each target makeup area as a reference, acquiring from the initial frame image a first target area image corresponding to each target makeup area includes:

Detecting first face key points corresponding to the initial frame image;

Acquiring a face area image corresponding to the initial frame image according to the first face key points;

Taking the beauty mask map corresponding to each target makeup area as a reference, acquiring from the face area image the first target area image corresponding to each target makeup area.

6. According to 5, taking the beauty mask map corresponding to each target makeup area as a reference, acquiring from the face area image the first target area image corresponding to each target makeup area includes:

Converting the beauty mask map corresponding to a first target makeup area and the face area image into binarized images, the first target makeup area being any one of the target makeup areas;

Performing an AND operation on the binarized image of the beauty mask map and the binarized image of the face area image, obtaining a first mask image corresponding to the intersection of the beauty mask map and the face area image;

Performing an AND operation on the first mask image and the face area image corresponding to the initial frame image, obtaining the first target area image.
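The two AND operations of 6 can be sketched with boolean masks. Grayscale inputs and the nonzero binarization threshold are assumptions, since the claim does not fix either:

```python
import numpy as np

def extract_region(mask_gray, face_gray, face_img):
    mask_bin = mask_gray > 0            # binarized beauty mask map
    face_bin = face_gray > 0            # binarized face area image
    first_mask = mask_bin & face_bin    # AND: first mask image (intersection)
    # AND the first mask with the face area image itself: pixels outside
    # the intersection are zeroed, leaving the first target area image.
    return np.where(first_mask[..., None], face_img, 0)
```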

7. According to 6, before performing the AND operation on the binarized image of the beauty mask map and the binarized image of the face area image, the method further includes:

Determining, according to standard face key points corresponding to the beauty mask map, one or more first anchor points located on the outline of the target makeup area included in the beauty mask map;

Determining, from the face area image and according to the first face key points, a second anchor point corresponding to each first anchor point;

Stretching the beauty mask map so that each first anchor point is stretched to the position of its corresponding second anchor point.

8. According to 1, before splitting out from the eye shadow mask map the beauty mask map corresponding to each target makeup area according to each target makeup area of eye shadow application, the method further includes:

Determining, according to standard face key points corresponding to the eye shadow mask map, one or more first anchor points located on the outline of each makeup area in the eye shadow mask map;

Determining, from the face area image and according to the first face key points, a second anchor point corresponding to each first anchor point;

Stretching the eye shadow mask map so that each first anchor point is stretched to the position of its corresponding second anchor point.

9. According to 5, acquiring the face area image corresponding to the initial frame image according to the first face key points includes:

Performing rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image;

Cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points;

Scaling the image containing the face area to a preset size, obtaining the face area image corresponding to the initial frame image.

10. According to 9, performing rotation correction on the initial frame image and the first face key points according to the first face key points includes:

Determining left-eye center coordinates and right-eye center coordinates according to the left-eye key points and right-eye key points included in the first face key points;

Determining the rotation angle and the rotation-center coordinates corresponding to the initial frame image according to the left-eye center coordinates and the right-eye center coordinates;

Performing rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation-center coordinates.

11. According to 9, cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points includes:

Performing image cropping on the face area contained in the corrected initial frame image according to the corrected first face key points.

12. According to 11, performing image cropping on the face area contained in the corrected initial frame image according to the corrected first face key points includes:

Determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key points;

Determining, according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value, and the maximum ordinate value, the crop box corresponding to the face area in the corrected initial frame image;

Cropping the image containing the face area from the corrected initial frame image according to the crop box.

13. According to 12, the method further includes:

Enlarging the crop box by a preset multiple;

Cropping the image containing the face area from the corrected initial frame image according to the enlarged crop box.

14. According to 9, the method further includes:

Performing scaling and translation on the corrected first face key points according to the size of the image containing the face area and the preset size.

15. According to 1, the method further includes:

Detecting whether the initial frame image and the current frame image each contain only the face image of the same user;

If so, performing the operation of determining the current makeup progress of the user applying the specific makeup look;

If not, sending prompt information to the user's terminal, the prompt information being used to prompt the user to keep only the same user's face appearing in the real-time makeup video.

16. A makeup progress detection apparatus, comprising:

an acquisition module, configured to obtain an eye shadow mask map as well as the initial frame image and the current frame image of a real-time makeup video in which a user is currently applying a specific makeup look;

a splitting module, configured to split out from the eye shadow mask map, according to each target makeup area of eye shadow application, the beauty mask map corresponding to each target makeup area;

a makeup progress determination module, configured to determine the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area.

17. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor running the computer program to implement the method according to any one of claims 1-15.

18. A computer-readable storage medium on which a computer program is stored, the program being executed by a processor to implement the method according to any one of claims 1-15.

An embodiment of the present application provides a makeup progress detection method for the makeup progress corresponding to eye shadow makeup. Referring to Fig. 12, this embodiment specifically includes the following steps:

Step 501: Obtain an eye shadow mask map as well as the initial frame image and the current frame image of a real-time makeup video in which the user is currently applying a specific makeup look.

The execution subject of this embodiment is a server. A client adapted to the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or computer. When the user needs the makeup progress detection service, the user opens the client on the terminal, and the client displays multiple eye shadow mask maps. The eye shadow mask maps are made on the basis of a preset standard face image, which is a face image with an unoccluded face, clear facial features, and the line between the two eyes parallel to the horizontal. Each eye shadow mask map corresponds to a different eye shadow makeup effect.

The interface displayed by the client can display multiple eye shadow mask maps at the same time, and the user selects one of them. The client sends the selected eye shadow mask map to the server, and the server receives it.

The client's display interface is also provided with a video upload interface. When a click on the video upload interface is detected, the terminal's camera is called to shoot a real-time makeup video of the user; during shooting, the user applies eye shadow to his or her own face. The user's terminal transmits the captured real-time makeup video to the server as a video stream, and the server receives each frame of the video.

In this embodiment, the server takes the first received frame as the initial frame image and uses it as the reference against which the current makeup progress of each subsequently received frame is compared. Since every subsequent frame is processed in the same way, this embodiment takes the current frame image received at the current moment as an example to describe the makeup progress detection process.

In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the user's real-time makeup video, the server also detects whether both images contain only the face image of the same user. It first detects whether each of the two images contains exactly one face image. If the initial frame image and/or the current frame image contains multiple face images, or contains no face image, prompt information is sent to the user's terminal. The user's terminal receives and displays the prompt information to remind the user to keep only the same user's face appearing in the real-time makeup video. For example, the prompt information may be "Please keep only the same person's face in the shot".

If both the initial frame image and the current frame image are detected to contain exactly one face image, it is further judged whether the face image in the initial frame image and the face image in the current frame image belong to the same user. Specifically, face recognition technology can be used to extract the face feature information of the face image in each of the two frames, and the similarity between the two sets of features is then calculated. If the calculated similarity is greater than or equal to a set value, the faces in the initial frame image and the current frame image are determined to belong to the same user. If the calculated similarity is smaller than the set value, the faces are determined to belong to different users, and prompt information is sent to the user's terminal, which receives and displays it to remind the user to keep only the same user's face in the real-time makeup video.
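The text does not pin the "similarity of feature information" to a particular metric; cosine similarity is one common choice for face feature vectors. A sketch, with the 0.8 threshold purely illustrative:

```python
import math

def same_user(feat_a, feat_b, threshold=0.8):
    # Cosine similarity between two face feature vectors; at or above
    # the set value, the two frames are judged to show the same user.
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b) >= threshold
```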

After obtaining the initial frame image, the current frame image, and the eye shadow mask map in this step, the server determines the user's current makeup progress through the operations of steps 502 and 503 below.

Step 502: According to each target makeup area of eye shadow application, split out from the eye shadow mask map the beauty mask map corresponding to each target makeup area.

Eye shadow makeup mainly involves operations such as large-area blending of the upper eyelid, brightening the middle of the upper eyelid, the lying-silkworm area of the lower eyelid, and brightening the inner corner of the eye. Large-area blending of the upper eyelid and brightening its middle are both performed on the upper eyelid, so there is an overlapping application area, and eye shadow application counts as complete only after all areas have been made up. The eye shadow mask map obtained in step 501 therefore needs to be split, according to each target makeup area of eye shadow application, into a beauty mask map per target makeup area: for example, one for the large-area upper-eyelid blending, one for the middle of the upper eyelid, one for the lying-silkworm part of the lower eyelid, one for the inner corner of the eye, and so on.

Step 503: Determine the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area.

First, taking the beauty mask map corresponding to each target makeup area as a reference, the first target area image corresponding to each target makeup area is acquired from the initial frame image, and the second target area image corresponding to each target makeup area is acquired from the current frame image. Then, the current makeup progress corresponding to the current frame image is determined from the first and second target area images of each target makeup area.

The first and second target area images of each target makeup area are acquired in the same way. This embodiment takes the acquisition of the first target area image for one target makeup area as an example for detailed description. The server acquires the first target area image corresponding to that target makeup area through the following steps S1-S3:

S1: Detect the first face key points corresponding to the initial frame image.

The server is configured with a pre-trained detection model for detecting face key points and provides a face key point detection interface service through this model. After obtaining the initial frame image of the user's makeup video, the server calls this interface service and identifies all face key points of the user's face in the initial frame image through the detection model. To distinguish them from the face key points of the current frame image, this embodiment calls all face key points corresponding to the initial frame image the first face key points, and all face key points corresponding to the current frame image the second face key points.

The recognized face key points include key points on the contour of the user's face and key points of the mouth, nose, eyes, eyebrows, and other parts. The number of recognized face key points may be 106.

S2: Acquire the face area image corresponding to the initial frame image according to the first face key points.

The server acquires the face area image corresponding to the initial frame image through the following steps S20-S22:

S20: Perform rotation correction on the initial frame image and the first face key points according to the first face key points.

When the user shoots the makeup video through the terminal, the pose angle of the face cannot be guaranteed to be the same in every frame. To improve the accuracy of comparing the current frame image with the initial frame image, the face in every frame needs to be rotation-corrected so that, after correction, the line between the eyes lies on the same horizontal line in every frame. This ensures that the face pose angle is the same across frames and avoids large makeup progress detection errors caused by differing pose angles.

Specifically, the left-eye center coordinates and the right-eye center coordinates are determined from the left-eye key points and right-eye key points included in the first face key points. All left-eye key points of the left-eye area and all right-eye key points of the right-eye area are selected from the first face key points. The abscissas of all left-eye key points are averaged, and so are the ordinates; the two averages form one coordinate, which is taken as the left-eye center. The right-eye center coordinates are determined in the same way.
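The eye-center computation just described is the centroid of that eye's key points. A small sketch:

```python
def eye_center(eye_points):
    # Mean of the abscissas and mean of the ordinates of all key
    # points of one eye give that eye's center coordinate.
    xs = [p[0] for p in eye_points]
    ys = [p[1] for p in eye_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```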

Then the rotation angle and rotation-center coordinates of the initial frame image are determined from the left-eye and right-eye center coordinates. As shown in Fig. 4, the horizontal difference dx and the vertical difference dy of the two coordinates are computed, as well as the length d of the line connecting the two eye centers. From d, dx, and dy, the angle θ between the eye line and the horizontal is computed; this angle θ is the rotation angle corresponding to the initial frame image. The midpoint of the eye line, computed from the two center coordinates, is the rotation-center coordinate corresponding to the initial frame image.
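The angle and rotation center follow directly from the two eye centers; `atan2` folds the dx/dy/d relationship of Fig. 4 into a single call. A sketch:

```python
import math

def rotation_params(left_center, right_center):
    dx = right_center[0] - left_center[0]     # horizontal difference
    dy = right_center[1] - left_center[1]     # vertical difference
    theta = math.degrees(math.atan2(dy, dx))  # eye line vs. horizontal
    # Midpoint of the eye line: the rotation center point.
    center = ((left_center[0] + right_center[0]) / 2,
              (left_center[1] + right_center[1]) / 2)
    return theta, center
```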

The initial frame image and the first face key points are then rotation-corrected using the computed rotation angle and rotation-center coordinates. Specifically, the rotation angle and rotation-center coordinates are input into a preset function that computes the image's rotation matrix; this preset function may be cv2.getRotationMatrix2D() in OpenCV. Calling it yields the rotation matrix corresponding to the initial frame image, and the product of the initial frame image and this rotation matrix gives the corrected initial frame image. Applying the rotation matrix to correct the initial frame image can also be done by calling cv2.warpAffine() in OpenCV.
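cv2.getRotationMatrix2D returns a 2×3 affine matrix. A NumPy equivalent following OpenCV's documented convention, so the geometry can be checked without the cv2 dependency:

```python
import math
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # Same 2x3 affine matrix cv2.getRotationMatrix2D produces:
    # rotation by angle_deg around center, with optional scaling.
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return np.array([[a,  b, (1.0 - a) * cx - b * cy],
                     [-b, a, b * cx + (1.0 - a) * cy]])
```

cv2.warpAffine applies this matrix to every pixel; multiplying a homogeneous point [x, y, 1] by it rotates that point the same way.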

For the first face key points, each key point must be corrected one by one so as to correspond to the corrected initial frame image. When correcting the first face key points one by one, two coordinate-system conversions are needed: first, the coordinate system with its origin at the upper-left corner of the initial frame image is converted into one with its origin at the lower-left corner; second, that system is further converted into one with the rotation-center coordinates as the origin, as shown in Fig. 5. After the two coordinate-system conversions, applying the following formula (1) to each first face key point completes the rotation correction of the first face key points.

x = x0·cosθ + y0·sinθ

y = y0·cosθ − x0·sinθ    (1)

In formula (1), x0 and y0 are the abscissa and ordinate of a first face key point before rotation correction, x and y are its abscissa and ordinate after rotation correction, and θ is the rotation angle above.

The corrected initial frame image and first face key points are based on the whole image, which contains not only the user's face information but also other redundant image information, so the corrected image needs to be cropped to the face area through step S21 below.

S21: Crop an image containing the face area from the corrected initial frame image according to the corrected first face key points.

First, the minimum abscissa value, minimum ordinate value, maximum abscissa value, and maximum ordinate value are determined from the corrected first face key points. The crop box corresponding to the face area in the corrected initial frame image is then determined from these four values. Specifically, the minimum abscissa value and minimum ordinate value form one coordinate point, which is taken as the upper-left vertex of the crop box corresponding to the face area; the maximum abscissa value and maximum ordinate value form another coordinate point, which is taken as its lower-right vertex. The position of the crop box in the corrected initial frame image is determined from these two vertices, and the image inside the crop box, i.e. the image containing the face area, is cropped out of the corrected initial frame image.
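The crop box of step S21, sketched:

```python
def face_crop_box(points):
    # Upper-left vertex = (min x, min y); lower-right = (max x, max y).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))
```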

In other embodiments of the present application, to ensure that the user's entire face area is cropped out and to avoid incomplete cropping causing large errors in subsequent makeup progress detection, the crop box may also be enlarged by a preset multiple, such as 1.15 or 1.25. This embodiment does not limit the specific value of the preset multiple, which can be set as required in practice. After the crop box is enlarged outward by the preset multiple, the image inside the enlarged crop box is cropped from the corrected initial frame image, so that an image containing the user's complete face area is obtained.
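Enlarging the crop box "outward" is most naturally read as scaling it about its own center; the text does not fix the exact geometry, so this is one plausible reading:

```python
def enlarge_box(top_left, bottom_right, factor=1.15):
    (x0, y0), (x1, y1) = top_left, bottom_right
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2   # box center
    half_w = (x1 - x0) / 2 * factor         # scaled half-extents
    half_h = (y1 - y0) / 2 * factor
    return (cx - half_w, cy - half_h), (cx + half_w, cy + half_h)
```

In practice the enlarged box would also be clamped to the image bounds before cropping.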

S22:将包含人脸区域的图像缩放至预设尺寸,得到初始帧图像对应的人脸区域图像。S22: Scale the image including the face area to a preset size to obtain the face area image corresponding to the initial frame image.

通过上述方式从初始帧图像中截取出包含用户的人脸区域的图像后,将包含人脸区域的图像缩放至预设尺寸,得到初始帧图像对应的人脸区域图像。该预设尺寸可以为390×390或400×400等。本申请实施例并不限制预设尺寸的具体取值,实际应用中可根据需求进行设定。After the image including the user's face area is intercepted from the initial frame image in the above manner, the image including the face area is scaled to a preset size to obtain a face area image corresponding to the initial frame image. The preset size may be 390×390 or 400×400 or the like. The embodiment of the present application does not limit the specific value of the preset size, which can be set according to requirements in practical applications.

为了使第一人脸关键点与缩放后的人脸区域图像相适应，将截取的包含人脸区域的图像缩放至预设尺寸后，还需根据缩放前包含人脸区域的图像的尺寸及该预设尺寸，对上述矫正后的第一人脸关键点进行缩放平移处理。具体地，依据缩放前包含人脸区域的图像的尺寸及图像所需缩放至的预设尺寸，确定每个第一人脸关键点的平移方向及平移距离，进而根据每个第一人脸关键点对应的平移方向和平移距离，分别对每个第一人脸关键点进行平移操作，并记录平移后的每个第一人脸关键点的坐标。To make the first face key points consistent with the scaled face area image, after the cropped image containing the face area is scaled to the preset size, the corrected first face key points also need to be scaled and translated according to the size of the cropped image before scaling and the preset size. Specifically, the translation direction and translation distance of each first face key point are determined from the size of the image containing the face area before scaling and the preset size to which it is scaled; each first face key point is then translated accordingly, and the coordinates of each translated first face key point are recorded.
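The scale-and-translate step above amounts to mapping each key point through the same crop and resize applied to the image. A minimal sketch, assuming the crop box and preset size are known (all names here are illustrative, not from the patent):

```python
def remap_keypoints(landmarks, box, preset=390):
    """Map key points from the original image into the cropped-and-resized
    face image: translate by the crop origin, then scale each axis by
    preset / crop-side length. 390 is one preset size from the text."""
    x0, y0, x1, y1 = box
    sx, sy = preset / (x1 - x0), preset / (y1 - y0)
    return [((x - x0) * sx, (y - y0) * sy) for x, y in landmarks]

# A point at the crop's top-left corner maps to (0, 0); the bottom-right
# corner maps to (preset, preset).
print(remap_keypoints([(75, 76), (325, 413)], (75, 76, 325, 413)))
```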

通过上述方式从初始帧图像中获得人脸区域图像，并通过旋转矫正和平移缩放等操作使第一人脸关键点与获得的人脸区域图像相适应，之后通过如下步骤S3的方式从人脸区域图像中提取出目标上妆区域对应的图像区域。The face area image is obtained from the initial frame image in the above manner, and the first face key points are adapted to it through rotation correction, translation and scaling. The image region corresponding to the target makeup area is then extracted from the face area image through the following step S3.

在本申请的另一些实施例中,执行步骤S3之前,还可以先对人脸区域图像进行高斯滤波处理,用于去除人脸区域图像中的噪声。具体地,根据预设尺寸的高斯核,对初始帧图像对应的人脸区域图像进行高斯滤波处理。In some other embodiments of the present application, before step S3 is performed, Gaussian filtering may be performed on the image of the face area to remove noise in the image of the face area. Specifically, Gaussian filtering is performed on the face area image corresponding to the initial frame image according to a Gaussian kernel of a preset size.

高斯滤波的高斯核是高斯滤波处理的关键参数，高斯核选取过小，则不能达到很好的滤波效果，高斯核选取过大，虽然能过滤掉图像中的噪声信息，但同时会平滑掉图像中有用的信息。本申请实施例选取预设尺寸的高斯核，预设尺寸可以为9×9。此外高斯滤波函数的另一组参数sigmaX、sigmaY均设置为0，经过高斯滤波后，图像信息更加平滑，进而提高后续获取化妆进度的准确性。The Gaussian kernel is the key parameter of Gaussian filtering: if it is too small, the filtering effect is poor; if it is too large, noise is filtered out but useful information in the image is smoothed away as well. The embodiment of the present application selects a Gaussian kernel of a preset size, which may be 9×9. In addition, the other parameters of the Gaussian filter function, sigmaX and sigmaY, are both set to 0. After Gaussian filtering the image is smoother, which improves the accuracy of the subsequent makeup progress detection.
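Setting sigmaX and sigmaY to 0 still yields a well-defined kernel because common filter implementations derive sigma from the kernel size in that case. The sketch below follows OpenCV's getGaussianKernel convention for this derivation; this is an assumption, as the text does not name the filter function:

```python
import numpy as np

def gaussian_kernel(ksize=9, sigma=0.0):
    """1-D Gaussian kernel of length ksize, normalised to sum to 1.
    When sigma <= 0 it is derived from the kernel size, following
    OpenCV's convention: 0.3*((ksize-1)*0.5 - 1) + 0.8."""
    if sigma <= 0:
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    x = np.arange(ksize) - (ksize - 1) / 2
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

k = gaussian_kernel(9, 0)          # the 9x9 case from the text, per axis
print(round(float(k.sum()), 6))    # prints 1.0
```

A 2-D 9×9 filter is the outer product of this kernel with itself, applied by convolution.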

通过上述方式获得人脸区域图像,或者对人脸区域图像进行高斯滤波处理之后,通过步骤S3从人脸区域图像中提取该目标上妆区域对应的第一目标区域图像。The face area image is obtained in the above manner, or after the Gaussian filtering is performed on the face area image, the first target area image corresponding to the target makeup area is extracted from the face area image by step S3.

S3:以该目标上妆区域对应的美妆掩码图为参照,从该人脸区域图像中提取该目标上妆区域对应的第一目标区域图像。S3: Using the beauty mask image corresponding to the target makeup area as a reference, extract a first target area image corresponding to the target makeup area from the face area image.

直接从人脸区域图像中抠取出该目标上妆区域对应的第一目标区域图像,能够避免其他区域对该目标上妆区域的化妆进度检测的干扰。尤其是能够避免重叠的目标上妆区域之间的相互干扰,提高眼影化妆进度检测的准确性。The first target area image corresponding to the target makeup area is directly extracted from the face area image, which can avoid the interference of other areas on the makeup progress detection of the target makeup area. In particular, mutual interference between overlapping target makeup areas can be avoided, and the accuracy of eye shadow makeup progress detection can be improved.

服务器具体通过如下步骤S40-S42的操作来获得第一目标区域图像,包括:The server obtains the first target area image specifically through the following steps S40-S42, including:

S40:分别将该目标上妆区域对应的美妆掩码图和人脸区域图像转换为二值化图像。S40: Transform the beauty mask image corresponding to the target makeup area and the face area image into binary images respectively.

S41:对美妆掩码图对应的二值化图像和人脸区域图像对应的二值化图像进行与运算，获得美妆掩码图与该人脸区域图像的相交区域对应的第一掩膜图像。S41: Perform an AND operation on the binarized image corresponding to the beauty mask image and the binarized image corresponding to the face area image, to obtain a first mask image corresponding to the intersection of the beauty mask image and the face area image.

分别将美妆掩码图对应的二值化图像和该人脸区域图像对应的二值化图像中坐标相同的像素点的像素值进行与运算。由于美妆掩码图中只有目标上妆区域中的像素点的像素值不为零,其他区域的像素点均为零。因此与运算得到的第一掩膜图像相当于从初始帧图像对应的人脸区域图像中截取出了各个目标上妆区域。An AND operation is performed on the pixel values of the pixel points with the same coordinates in the binarized image corresponding to the beauty mask map and the binarized image corresponding to the face region image. Since only the pixel values of the pixels in the target makeup area in the beauty mask image are not zero, the pixels in other areas are all zero. Therefore, the first mask image obtained by the calculation is equivalent to intercepting each target makeup area from the face area image corresponding to the initial frame image.

在本申请的另一些实施例中，由于美妆掩码图是基于预设标准人脸图像生成的，因此美妆掩码图中的目标上妆区域很可能无法与初始帧图像中用户实际上妆的区域完全重合，从而影响化妆进度检测的准确性。因此在将美妆掩码图对应的二值化图像和人脸区域图像对应的二值化图像进行与运算之前，还可以先对美妆掩码图中的目标上妆区域与初始帧图像中的对应区域进行对齐操作。In other embodiments of the present application, since the beauty mask image is generated from a preset standard face image, the target makeup area in the beauty mask image may not coincide exactly with the area where the user actually applies makeup in the initial frame image, which would affect the accuracy of makeup progress detection. Therefore, before the AND operation between the binarized beauty mask image and the binarized face area image, the target makeup area in the beauty mask image may first be aligned with the corresponding area in the initial frame image.

具体地,根据美妆掩码图对应的标准人脸关键点,确定美妆掩码图中位于该目标上妆区域的轮廓上的一个或多个第一定位点。美妆掩码图对应的标准人脸关键点即为预设标准人脸图像对应的标准人脸关键点。首先确定该目标上妆区域的轮廓上是否包含标准人脸关键点,若包含,则将位于轮廓上的标准人脸关键点确定为该目标上妆区域对应的第一定位点。若不包含,则利用该目标上妆区域周围的标准人脸关键点,通过线性变换的方式生成位于该目标上妆区域的轮廓上的第一定位点。具体可以将周围的标准人脸关键点上移、下移、左移或右移等平移操作获得第一定位点。Specifically, according to the standard human face key points corresponding to the beauty mask image, one or more first positioning points located on the outline of the target makeup area in the beauty mask image are determined. The standard face key points corresponding to the beauty mask image are the standard face key points corresponding to the preset standard face image. Firstly, it is determined whether the contour of the target makeup area contains standard human face key points, and if yes, the standard human face key points on the contour are determined as the first positioning point corresponding to the target makeup area. If not included, then use the standard human face key points around the target makeup area to generate the first positioning point on the contour of the target makeup area by means of linear transformation. Specifically, a translation operation such as moving up, down, left, or right surrounding standard face key points can be used to obtain the first positioning point.

在本申请实施例中,目标上妆区域对应的第一定位点的数目可以为预设数目个,预设数目可以为3或4等。In the embodiment of the present application, the number of first positioning points corresponding to the target makeup area may be a preset number, and the preset number may be 3 or 4, etc.

通过上述方式获得美妆掩码图中该目标上妆区域对应的第一定位点后，根据初始帧图像对应的第一人脸关键点，从初始帧图像中确定出与每个第一定位点对应的第二定位点。由于美妆掩码图对应的标准人脸关键点和初始帧图像对应的第一人脸关键点都是通过相同的检测模型获得的，不同位置的关键点都具有各自的编号。因此对于属于标准人脸关键点的第一定位点，从初始帧图像对应的第一人脸关键点中确定出与该第一定位点对应的标准人脸关键点的编号相同的第一人脸关键点，将确定出的第一人脸关键点作为该第一定位点对应的第二定位点。对于利用标准人脸关键点进行线性变换得到的第一定位点，则从初始帧图像对应的第一人脸关键点中确定出该第一定位点对应的第一人脸关键点，将对该第一人脸关键点进行相同的线性变换得到的点确定为该第一定位点对应的第二定位点。After the first anchor points corresponding to the target makeup area in the beauty mask image are obtained in the above manner, a second anchor point corresponding to each first anchor point is determined from the initial frame image according to the first face key points of the initial frame image. Since the standard face key points of the beauty mask image and the first face key points of the initial frame image are obtained by the same detection model, key points at different positions have their own numbers. Therefore, for a first anchor point that is itself a standard face key point, the first face key point with the same number as that standard face key point is found among the first face key points of the initial frame image and taken as the corresponding second anchor point. For a first anchor point obtained by linearly transforming a standard face key point, the corresponding first face key point is found among the first face key points of the initial frame image, and the point obtained by applying the same linear transformation to that key point is taken as the corresponding second anchor point.

通过上述方式确定每个第一定位点对应的第二定位点之后，对美妆掩码图进行拉伸处理，将每个第一定位点拉伸至对应的每个第二定位点对应的位置处，即使得拉伸后美妆掩码图中每个第一定位点的位置都与其对应的第二定位点的位置相同。After the second anchor point corresponding to each first anchor point is determined in the above manner, the beauty mask image is stretched so that each first anchor point is moved to the position of its corresponding second anchor point; that is, after stretching, the position of each first anchor point in the beauty mask image coincides with that of its corresponding second anchor point.
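One way to realize this stretch is to estimate an affine transform from the anchor-point pairs and warp the mask with it; with the preset number of 3 pairs the fit is exact. This is a hypothetical sketch of that idea (the patent does not name the warping method):

```python
import numpy as np

def affine_from_points(src, dst):
    """2x3 affine matrix mapping the first anchor points (src) onto the
    second anchor points (dst). With 3 point pairs the fit is exact;
    with more pairs it is a least-squares fit."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst
    return M.T                                     # applied as M @ [x, y, 1]

# Anchor points on the mask (src) and the matching points on the user's
# face (dst): a shift by (2, 3) combined with a 1.2x scale.
src = [(0, 0), (10, 0), (0, 10)]
dst = [(2, 3), (14, 3), (2, 15)]
M = affine_from_points(src, dst)
print(M @ np.array([10.0, 0.0, 1.0]))              # lands on (14, 3)
```

The resulting matrix could then be applied to every pixel of the mask (e.g. with an image library's warp-affine routine).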

通过上述方式能够将美妆掩码图中的目标上妆区域与初始帧图像中用户实际上妆的区域对齐，从而确保通过美妆掩码图能够精确地从初始帧图像中提取出该目标上妆区域对应的第一目标区域图像，进而提高化妆进度检测的准确性。In this way, the target makeup area in the beauty mask image can be aligned with the area where the user actually applies makeup in the initial frame image, ensuring that the first target area image corresponding to the target makeup area can be accurately extracted from the initial frame image using the beauty mask image, thereby improving the accuracy of makeup progress detection.

将该目标上妆区域对应的美妆掩码图与初始帧图像对齐之后，再通过步骤S41的操作获得美妆掩码图与初始帧图像的人脸区域图像之间相交区域对应的第一掩膜图像，然后通过步骤S42的方式来抠取该目标上妆区域对应的第一目标区域图像。After the beauty mask image corresponding to the target makeup area is aligned with the initial frame image, the first mask image corresponding to the intersection of the beauty mask image and the face area image of the initial frame image is obtained through the operation of step S41, and the first target area image corresponding to the target makeup area is then extracted through step S42.

S42:对第一掩膜图像与初始帧图像对应的人脸区域图像进行与运算,获得该目标上妆区域对应的第一目标区域图像。S42: Perform an AND operation on the face area image corresponding to the first mask image and the initial frame image to obtain a first target area image corresponding to the target makeup area.

由于第一掩膜图像为二值化图像，因此对第一掩膜图像和初始帧图像对应的人脸区域图像进行与运算，从初始帧图像对应的人脸区域图像中截取出该目标上妆区域的图像，即得到了该目标上妆区域对应的第一目标区域图像。Since the first mask image is a binarized image, an AND operation between the first mask image and the face area image corresponding to the initial frame image cuts the target makeup area out of that face area image, yielding the first target area image corresponding to the target makeup area.
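Steps S40–S42 can be illustrated with toy arrays (NumPy stands in for an image library; the values are illustrative, a real mask would be a 0/255 image):

```python
import numpy as np

# Toy 4x4 stand-ins: `face` is the face-area image, `mask` the beauty
# mask image with one target makeup area.
face = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255                               # target makeup area

mask_bin = (mask > 0).astype(np.uint8)             # S40: binarise the mask
face_bin = (face > 0).astype(np.uint8)             # S40: binarise the face image
first_mask = mask_bin & face_bin                   # S41: AND -> first mask image
target = face * first_mask                         # S42: cut out the target area
print(int(target.sum()))                           # prints 30 (5 + 6 + 9 + 10)
```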

对于每个目标上妆区域对应的第一目标区域图像和第二目标区域图像,均可通过上述步骤S40-S42的操作来获得。The first target area image and the second target area image corresponding to each target makeup area can be obtained through the above-mentioned operations of steps S40-S42.

在本申请的另一些实施例中，考虑到实际化妆场景中目标上妆区域的边缘可能不会有很清晰的轮廓线，如眼影化妆场景中越靠近边缘颜色越浅，从而使眼影妆容更自然，不会显得很突兀。因此通过上述实施例获得第一目标区域图像和第二目标区域图像之后，还分别对第一目标区域图像和第二目标区域图像中的目标上妆区域进行边界腐蚀处理，使目标上妆区域的边界模糊，使得第一目标区域图像和第二目标区域图像中的目标上妆区域更贴近真实上妆范围，进而提高化妆进度检测的准确性。In other embodiments of the present application, considering that in a real makeup scene the edge of the target makeup area may not have a clear contour line (for example, in eye shadow makeup the color becomes lighter toward the edge, making the makeup look more natural and less abrupt), after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion is further performed on the target makeup area in each of them. Blurring the boundary in this way makes the target makeup area in the first and second target area images closer to the real makeup range, thereby improving the accuracy of makeup progress detection.

通过上述方式获得的各目标上妆区域对应的第一目标区域图像和第二目标区域图像,这些图像的颜色空间均为RGB颜色空间。本申请实施例预先通过大量试验确定了眼影上妆对颜色空间的各通道分量的影响,发现对RGB颜色空间中各颜色通道的影响差别不大。而HLS颜色空间是由Hue(色调)、Saturation(饱和度) 和Light(亮度)三个分量组成,通过试验发现眼影上妆能够引起HLS颜色空间的亮度分量发生明显的变化。The color space of the first target area image and the second target area image corresponding to each target makeup area obtained in the above manner is RGB color space. In the embodiment of the present application, a large number of tests have been conducted in advance to determine the influence of eye shadow makeup on each channel component of the color space, and it is found that the influence on each color channel in the RGB color space is not very different. The HLS color space is composed of three components: Hue (hue), Saturation (saturation) and Light (brightness). Through experiments, it is found that eye shadow makeup can cause significant changes in the brightness component of the HLS color space.

因此通过上述任一方式获得各目标上妆区域对应的第一目标区域图像和第二目标区域图像后，还将每个目标上妆区域对应的第一目标区域图像和第二目标区域图像均由RGB颜色空间转换为HLS颜色空间下。并从转换后的各个第一目标区域图像和各个第二目标区域图中的HLS颜色空间中分离出预设单通道分量，得到每个目标上妆区域对应的HLS颜色空间下仅包含预设单通道分量的第一目标区域图像和第二目标区域图像。预设单通道分量可以为亮度分量。Therefore, after the first target area image and the second target area image corresponding to each target makeup area are obtained in any of the above ways, both images are converted from the RGB color space to the HLS color space. A preset single-channel component is then separated from the HLS color space of each converted first and second target area image, yielding, for each target makeup area, first and second target area images containing only the preset single-channel component in the HLS color space. The preset single-channel component may be the lightness component.
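The lightness component of HLS is (max + min) / 2 over the three colour channels. A minimal NumPy sketch of extracting it (a real pipeline would more likely use an image library's RGB-to-HLS conversion, which is an assumption here):

```python
import numpy as np

def lightness(rgb):
    """L channel of the HLS colour space: (max + min) / 2 over the
    colour channels, returned as an 8-bit single-channel image."""
    x = rgb.astype(np.float32)
    return ((x.max(axis=-1) + x.min(axis=-1)) / 2).round().astype(np.uint8)

px = np.array([[[200, 100, 50]]], dtype=np.uint8)  # one RGB pixel
print(int(lightness(px)[0, 0]))                    # prints 125, i.e. (200+50)/2
```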

然后根据转换后每个目标上妆区域对应的第一目标区域图像和第二目标区域图像,确定当前帧图像对应的当前化妆进度。Then, according to the first target area image and the second target area image corresponding to each target makeup area after conversion, the current makeup progress corresponding to the current frame image is determined.

具体地,分别计算转换后同一个目标上妆区域对应的第一目标区域图像和第二目标区域图像中位置相同的像素点对应的上述预设单通道分量的差值绝对值。例如,假设该目标上妆区域为卧蚕区域,则计算初始帧图像中卧蚕区域对应的第一目标区域图像和当前帧图像中卧蚕区域对应的第二目标区域图像中坐标相同的像素点之间的亮度分量的差值绝对值。Specifically, the absolute value of the difference between the preset single-channel components corresponding to the pixels at the same position in the first target area image and the second target area image corresponding to the same target makeup area after conversion are respectively calculated. For example, assuming that the target makeup area is a lying silkworm area, calculate the pixels with the same coordinates in the first target area image corresponding to the lying silkworm area in the initial frame image and the second target area image corresponding to the lying silkworm area in the current frame image The absolute value of the difference between the luma components.

统计每个目标上妆区域对应的差值绝对值满足眼影化妆对应的预设化妆完成条件的像素点数目。其中,预设化妆完成条件为像素点对应的差值绝对值大于眼影化妆对应的第一预设阈值,第一预设阈值可以为11或12等。Count the number of pixels whose absolute value of the difference corresponding to each target makeup area satisfies the preset makeup completion condition corresponding to the eyeshadow makeup. Wherein, the preset makeup completion condition is that the absolute value of the difference corresponding to the pixel points is greater than the first preset threshold corresponding to the eye shadow makeup, and the first preset threshold can be 11 or 12, etc.

统计每个目标上妆区域对应的第一目标区域图像或第二目标区域图像中所有像素点的总数。然后对于每个目标上妆区域，均分别计算统计的差值绝对值满足预设化妆完成条件的像素点数目与该目标上妆区域中的像素点总数目之间的比值，分别得到每个目标上妆区域对应的化妆进度。然后根据每个目标上妆区域对应的化妆进度及每个目标上妆区域对应的预设权重，计算当前帧图像对应的当前化妆进度。The total number of pixels in the first target area image or the second target area image corresponding to each target makeup area is counted. Then, for each target makeup area, the ratio between the number of pixels whose absolute difference satisfies the preset makeup completion condition and the total number of pixels in that target makeup area is calculated, yielding the makeup progress of each target makeup area. The current makeup progress corresponding to the current frame image is then calculated from the makeup progress of each target makeup area and its preset weight.

其中,每个目标上妆区域对应的预设权重之和为1,各个目标上妆区域对应的预设权重可以相同,也可以不相同。例如,假设有上眼皮、上眼皮中间部位、卧蚕部位和眼头部位共4个目标上妆区域,则这4个目标上妆区域对应的预设权重可以均为0.25。本申请实施例并不限制每个目标上妆区域的预设权重的取值,实际应用中可根据需求进行限定。Wherein, the sum of the preset weights corresponding to each target makeup area is 1, and the preset weights corresponding to each target makeup area may be the same or different. For example, assuming that there are 4 target makeup areas including the upper eyelid, the middle part of the upper eyelid, the lying silkworm part and the head part, the preset weights corresponding to these 4 target makeup areas can all be 0.25. The embodiment of the present application does not limit the value of the preset weight of each target makeup area, which can be defined according to requirements in practical applications.
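The per-region ratio and the weighted combination can be sketched as follows. The threshold of 11 and the equal weights of 0.25 are the example values from the text; the function names are made up:

```python
import numpy as np

def region_progress(before_l, after_l, threshold=11):
    """Fraction of pixels whose single-channel (lightness) value changed
    by more than the first preset threshold."""
    diff = np.abs(before_l.astype(np.int16) - after_l.astype(np.int16))
    return float((diff > threshold).mean())

def overall_progress(progresses, weights):
    """Weighted sum over the target makeup areas; the weights sum to 1."""
    return float(np.dot(progresses, weights))

before = np.full((10, 10), 100, dtype=np.uint8)    # region before makeup
after = before.copy()
after[:5] += 20                                    # half the region made up
p = region_progress(before, after)                 # 0.5
print(overall_progress([p, p, 0.0, 1.0], [0.25] * 4))   # prints 0.5
```

Casting to int16 before subtracting avoids uint8 wrap-around when the "after" pixel is darker than the "before" pixel.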

在本申请的另一些实施例中,为了进一步提高化妆进度检测的准确性,还对同一目标上妆区域对应的第一目标区域图像和第二目标区域图像中的该目标上妆区域进行进一步对齐。In other embodiments of the present application, in order to further improve the accuracy of makeup progress detection, the target makeup area in the first target area image and the second target area image corresponding to the same target makeup area are further aligned .

具体地，分别对同一个目标上妆区域对应的仅包含上述预设单通道分量的第一目标区域图像和第二目标区域图像进行二值化处理，即将第一目标区域图像和第二目标区域图像中该目标上妆区域中的像素点对应的上述预设单通道分量的值均修改为1，将其余位置处的像素点的预设单通道分量的值均修改为0。通过二值化处理得到第一目标区域图像对应的第一二值化掩膜图像和第二目标区域图像对应的第二二值化掩膜图像。Specifically, binarization is performed on the first target area image and the second target area image that correspond to the same target makeup area and contain only the above preset single-channel component: in both images, the value of the preset single-channel component is set to 1 for pixels inside the target makeup area and to 0 for pixels elsewhere. Binarization yields a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image.

然后对第一二值化掩膜图像和第二二值化掩膜图像进行与运算，得到该目标上妆区域对应的第一目标区域图像和第二目标区域图像的相交区域对应的第二掩膜图像。即分别将第一二值化掩膜图像和第二二值化掩膜图像中相同位置处的像素点进行与运算，得到相交区域的第二掩膜图像。该第二掩膜图像中像素点的预设单通道分量不为零的区域，即为第一目标区域图像和第二目标区域中重合的目标上妆区域。An AND operation is then performed on the first binarized mask image and the second binarized mask image to obtain a second mask image corresponding to the intersection of the first and second target area images for that target makeup area. That is, pixels at the same positions in the two binarized mask images are ANDed to obtain the second mask image of the intersection. The region of the second mask image where the preset single-channel component is non-zero is exactly the target makeup area shared by the first target area image and the second target area image.

通过前述步骤的操作获得初始帧图像对应的人脸区域图像及当前帧图像对应的人脸区域图像。对第二掩膜图像和初始帧图像对应的人脸区域图像进行与运算，得到该目标上妆区域对应的新第一目标区域图像；对第二掩膜图像和当前帧图像对应的人脸区域图像进行与运算，得到该目标上妆区域对应的新第二目标区域图像。The face area image corresponding to the initial frame image and the face area image corresponding to the current frame image are obtained through the preceding steps. An AND operation between the second mask image and the face area image of the initial frame image yields a new first target area image for the target makeup area; an AND operation between the second mask image and the face area image of the current frame image yields a new second target area image for the target makeup area.

在本申请的另一些实施例中，也可以利用上述第二掩膜图像和前文得到的边界腐蚀后该目标上妆区域对应的第一目标区域图像进行与运算，得到该目标上妆区域对应的新第一目标区域图像。以及对第二掩膜图像和前文得到的边界腐蚀后该目标上妆区域对应的第二目标区域图像进行与运算，得到该目标上妆区域对应的新第二目标区域图像。In other embodiments of the present application, the second mask image may instead be ANDed with the boundary-eroded first target area image of the target makeup area obtained above, yielding a new first target area image for the target makeup area; likewise, ANDing the second mask image with the boundary-eroded second target area image yields a new second target area image for the target makeup area.

由于第二掩膜图像中包含该目标上妆区域对应的第一目标区域图像和第二目标区域图像中重合的区域，因此通过第二掩膜图像按照上述方式获得新第一目标区域图像和新第二目标区域图像，使得新第一目标区域图像和新第二目标区域图像中目标上妆区域的位置是完全一致的，如此后续依据完全对齐的目标上妆区域在预设单通道分量上的变化来确定化妆进度，确保了进行比对的区域是完全一致的，大大提高了化妆进度检测的准确性。Since the second mask image contains the region shared by the first and second target area images of the target makeup area, obtaining the new first and second target area images through the second mask image in the above manner makes the position of the target makeup area identical in the two new images. Determining the makeup progress from the change of the fully aligned target makeup area in the preset single-channel component then ensures that the compared regions coincide exactly, greatly improving the accuracy of makeup progress detection.

通过上述任一方式对初始帧图像和当前帧图像中的该目标上妆区域进行对齐，得到新第一目标区域图像和新第二目标区域图像后，再次通过上述步骤503的操作来确定该目标上妆区域对应的化妆进度。对于其他每个目标上妆区域，均按照上述方式对初始帧图像和当前帧图像中的其他每个目标上妆区域进行对齐，分别得到其他每个目标上妆区域对应的化妆进度。进而根据得到的每个目标上妆区域的化妆进度，计算当前帧图像对应的当前化妆进度。After the target makeup area in the initial frame image and the current frame image is aligned in any of the above ways to obtain the new first target area image and the new second target area image, the makeup progress of that target makeup area is determined again through the operation of step 503 above. Every other target makeup area is aligned between the initial frame image and the current frame image in the same way, yielding its corresponding makeup progress. The current makeup progress corresponding to the current frame image is then calculated from the makeup progress of each target makeup area.

通过上述任一方式确定出当前化妆进度后,服务器发送该当前化妆进度给用户的终端。用户的终端接收到当前化妆进度后,显示该当前化妆进度。当前化妆进度可以为比值或百分数。终端可以通过进度条的形式来显示当前化妆进度。After the current makeup progress is determined by any of the above methods, the server sends the current makeup progress to the user's terminal. After receiving the current makeup progress, the user's terminal displays the current makeup progress. The current makeup progress can be a ratio or a percentage. The terminal can display the current makeup progress in the form of a progress bar.

在用户化妆的过程中,通过本申请实施例提供的化妆进度检测方法,实时检测第一帧图像之后的每帧图像相对于第一帧图像的化妆进度,并将检测的化妆进度显示给用户,使用户能够直观地看到自己的化妆进度,提高化妆效率。During the process of the user's makeup, the makeup progress detection method provided by the embodiment of the present application can detect the makeup progress of each frame image after the first frame image relative to the first frame image in real time, and display the detected makeup progress to the user, It enables users to visually see the progress of their makeup and improve the efficiency of makeup.

为了便于理解本申请实施例提供的方法，下面结合附图进行说明。如图13所示，根据初始帧图像及其对应的第一人脸关键点，当前帧图像及其对应的第二人脸关键点，分别对初始帧图像和当前帧图像中的人脸进行矫正和裁剪。将眼影掩码图拆分为每个目标上妆区域对应的美妆掩码图。然后将每个目标上妆区域对应的美妆掩码图分别与初始帧图像及当前帧图像各自对应的人脸区域图像进行对齐。以每个目标上妆区域对应的美妆掩码图为参照从初始帧图像中抠取出每个目标上妆区域对应的第一目标区域图像，以及以每个目标上妆区域对应的美妆掩码图为参照从当前帧图像中抠取出每个目标上妆区域对应的第二目标区域图像。然后将每个目标上妆区域对应的第一目标区域图像和第二目标区域图像转换为HLS颜色空间下仅包含预设单通道分量的图像。计算转换后同一目标上妆区域对应的第一目标区域图像和第二目标区域图像中位置相同的像素点对应的预设单通道分量的差值绝对值。统计每个目标上妆区域对应的差值绝对值满足预设化妆完成条件的像素点数目。分别计算每个目标上妆区域对应的像素点数目与对应目标上妆区域中的像素点总数目之间的比值，得到每个目标上妆区域对应的化妆进度。根据每个目标上妆区域对应的化妆进度及每个目标上妆区域对应的预设权重，计算当前帧图像对应的当前化妆进度。To facilitate understanding of the method provided by the embodiment of the present application, a description is given below with reference to the accompanying drawings. As shown in Figure 13, the faces in the initial frame image and the current frame image are corrected and cropped according to the initial frame image and its first face key points and the current frame image and its second face key points. The eye shadow mask image is split into a beauty mask image for each target makeup area. Each beauty mask image is then aligned with the face area images of the initial frame image and the current frame image respectively. With each beauty mask image as a reference, the first target area image of each target makeup area is extracted from the initial frame image, and the second target area image of each target makeup area is extracted from the current frame image. The first and second target area images of each target makeup area are then converted into images containing only the preset single-channel component in the HLS color space. For each target makeup area, the absolute difference of the preset single-channel component is calculated between pixels at the same positions in the converted first and second target area images, and the number of pixels whose absolute difference satisfies the preset makeup completion condition is counted. The ratio between this pixel count and the total number of pixels in the target makeup area gives the makeup progress of that target makeup area. The current makeup progress corresponding to the current frame image is calculated from the makeup progress of each target makeup area and its preset weight.

在本申请实施例中，利用人脸关键点，对视频帧中用户的人脸区域进行矫正和裁剪，提高了识别人脸区域的准确性。依据眼影化妆上妆区域存在重叠的特点，将眼影掩码图拆分成每个目标上妆区域对应的美妆掩码图，对每个目标上妆区域分别进行化妆进度检测，提高了化妆进度检测的准确性。将每个美妆掩码图中的目标上妆区域分别与视频帧中人脸区域图像中的目标上妆区域对齐，确保每个美妆掩码图均与初始帧图像及当前帧图像中对应的目标上妆区域的位置一致。通过对齐后的美妆掩码图分别从初始帧图像和当前帧图像中抠取出每个目标上妆区域对应的第一目标区域图像和第二目标区域图像。进一步地，还对同一目标上妆区域对应的第一目标区域图像和第二目标区域图像进行再次对齐，减少了比较时因位置信息引入的误差。而且本申请没有采用深度学习的方式，无需预先收集大量数据，通过对用户化妆的实时画面的捕获，经过服务器端的计算，将检测结果返回给用户。相比深度学习的模型推理方案，本申请在算法处理环节耗费更少的计算成本，减少了服务器的处理压力。In the embodiment of the present application, the user's face area in the video frames is corrected and cropped using face key points, improving the accuracy of face area recognition. Exploiting the fact that eye shadow makeup areas overlap, the eye shadow mask image is split into a beauty mask image for each target makeup area, and makeup progress is detected separately for each target makeup area, improving the accuracy of makeup progress detection. The target makeup area in each beauty mask image is aligned with the target makeup area in the face area image of the video frame, ensuring that each beauty mask image is positionally consistent with the corresponding target makeup area in the initial frame image and the current frame image. Using the aligned beauty mask images, the first and second target area images of each target makeup area are extracted from the initial frame image and the current frame image respectively. Furthermore, the first and second target area images of the same target makeup area are aligned once more, reducing errors introduced by position differences during comparison. In addition, the present application does not use deep learning, so no large amount of data needs to be collected in advance; real-time frames of the user applying makeup are captured, computed on the server side, and the detection result is returned to the user. Compared with a deep-learning model inference scheme, the present application consumes less computing cost in the algorithm processing stage and reduces the processing pressure on the server.

本申请实施例还提供一种化妆进度检测装置,该装置用于执行上述化妆进度检测方法来实时检测眼影化妆的进度。参见图14,该装置具体包括:An embodiment of the present application also provides a makeup progress detection device, which is used to implement the above makeup progress detection method to detect the progress of eye shadow makeup in real time. Referring to Figure 14, the device specifically includes:

获取模块1401,用于获取眼影掩码图以及用户当前进行特定妆容的实时化妆视频中的初始帧图像和当前帧图像;The obtaining module 1401 is used to obtain the initial frame image and the current frame image in the eye shadow mask map and the real-time makeup video in which the user is currently performing a specific makeup look;

拆分模块1402,用于根据眼影上妆的每个目标上妆区域,分别从眼影掩码图中拆分出每个目标上妆区域对应的美妆掩码图;Splitting module 1402, is used for according to each target makeup area of eye shadow makeup, respectively splits the beauty makeup mask map corresponding to each target makeup area from the eye shadow mask map;

化妆进度确定模块1403,用于根据初始帧图像、当前帧图像及每个目标上妆区域对应的美妆掩码图,确定当前帧图像对应的当前化妆进度。The makeup progress determination module 1403 is configured to determine the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area.

化妆进度确定模块1403，用于分别以每个目标上妆区域对应的美妆掩码图为参照，从初始帧图像中获取每个目标上妆区域对应的第一目标区域图像；分别以每个目标上妆区域对应的美妆掩码图为参照，从当前帧图像中获取每个目标上妆区域对应的第二目标区域图像；根据每个目标上妆区域对应的第一目标区域图像及第二目标区域图像，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 1403 is configured to obtain, with the beauty mask image of each target makeup area as a reference, the first target area image of each target makeup area from the initial frame image; to obtain, with the beauty mask image of each target makeup area as a reference, the second target area image of each target makeup area from the current frame image; and to determine the current makeup progress corresponding to the current frame image according to the first and second target area images of each target makeup area.

化妆进度确定模块1403，用于分别将每个目标上妆区域对应的第一目标区域图像和第二目标区域图像转换为HLS颜色空间下包含预设单通道分量的图像；根据转换后的每个目标上妆区域对应的第一目标区域图像和第二目标区域图像，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 1403 is configured to convert the first target area image and the second target area image of each target makeup area into images containing the preset single-channel component in the HLS color space, and to determine the current makeup progress corresponding to the current frame image according to the converted first and second target area images of each target makeup area.

The makeup progress determination module 1403 is configured to: calculate, for each target makeup area, the absolute difference of the preset single-channel component between pixels at the same position in the converted first target area image and second target area image of that area; count, for each target makeup area, the number of pixels whose absolute difference satisfies a preset makeup-completion condition; calculate the ratio of the counted number of pixels of each target makeup area to the total number of pixels in that target makeup area, obtaining the makeup progress of each target makeup area; and calculate the current makeup progress corresponding to the current frame image according to the makeup progress of each target makeup area and the preset weight of each target makeup area.
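A minimal sketch of this per-area computation follows; the completion threshold of 20 lightness levels and the equal area weights are illustrative assumptions, since the embodiment leaves the completion condition and the weights as preset values:

```python
import numpy as np

def region_progress(first, second, thresh=20):
    """Fraction of pixels in one makeup area whose single-channel difference
    between the initial frame and the current frame reaches the
    makeup-completion threshold."""
    diff = np.abs(first.astype(np.int32) - second.astype(np.int32))
    return np.count_nonzero(diff >= thresh) / diff.size

def overall_progress(progresses, weights):
    """Weighted sum of the per-area progress values."""
    return float(np.dot(progresses, weights))

# toy single-channel patches for two makeup areas
init = [np.full((4, 4), 100, np.uint8), np.full((4, 4), 100, np.uint8)]
curr = [np.full((4, 4), 140, np.uint8), np.full((4, 4), 100, np.uint8)]
per_area = [region_progress(a, b) for a, b in zip(init, curr)]
print(overall_progress(per_area, [0.5, 0.5]))  # one area done -> 0.5
```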

The makeup progress determination module 1403 is configured to detect first face key points corresponding to the initial frame image; obtain a face area image corresponding to the initial frame image according to the first face key points; and, taking the beauty mask map corresponding to each target makeup area as a reference, obtain the first target area image corresponding to each target makeup area from the face area image.

The makeup progress determination module 1403 is configured to convert the beauty mask map corresponding to a first target makeup area and the face area image into binarized images, where the first target makeup area is any one of the target makeup areas; perform an AND operation on the binarized image of the beauty mask map and the binarized image of the face area image to obtain a first mask image corresponding to the intersection of the beauty mask map and the face area image; and perform an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain the first target area image.
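A minimal sketch of this binarize-and-AND masking, using NumPy in place of OpenCV's `cv2.threshold` and `cv2.bitwise_and` (the zero binarization threshold is an assumption):

```python
import numpy as np

def binarize(gray, thresh=0):
    """Pixels above `thresh` become 255, all others 0."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

def first_target_area(mask_img, face_img):
    """AND the binarized beauty mask with the binarized face image to get
    the intersection mask, then AND that mask with the face image itself."""
    inter = binarize(mask_img) & binarize(face_img)  # first mask image
    return np.where(inter == 255, face_img, 0).astype(np.uint8)

mask = np.zeros((4, 4), np.uint8)
mask[1:3, 1:3] = 255                  # makeup area covers the centre
face = np.full((4, 4), 90, np.uint8)  # toy grayscale face patch
out = first_target_area(mask, face)
print(out[1, 1], out[0, 0])  # 90 inside the area, 0 outside
```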

The makeup progress determination module 1403 is further configured to determine, according to standard face key points corresponding to the beauty mask map, one or more first anchor points located on the contour of the target makeup area included in the beauty mask map; determine, from the face area image according to the first face key points, a second anchor point corresponding to each first anchor point; and stretch the beauty mask map so that each first anchor point is stretched to the position of its corresponding second anchor point.

The makeup progress determination module 1403 is further configured to determine, according to standard face key points corresponding to the eye shadow mask map, one or more first anchor points located on the contour of each makeup area in the eye shadow mask map; determine, from the face area image according to the first face key points, a second anchor point corresponding to each first anchor point; and stretch the eye shadow mask map so that each first anchor point is stretched to the position of its corresponding second anchor point.

The makeup progress determination module 1403 is configured to perform rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image; crop an image containing the face area from the corrected initial frame image according to the corrected first face key points; and scale the image containing the face area to a preset size to obtain the face area image corresponding to the initial frame image.

The makeup progress determination module 1403 is configured to determine left-eye center coordinates and right-eye center coordinates according to the left-eye key points and the right-eye key points included in the first face key points; determine a rotation angle and rotation center point coordinates corresponding to the initial frame image according to the left-eye center coordinates and the right-eye center coordinates; and perform rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.

The makeup progress determination module 1403 is configured to crop the face area contained in the corrected initial frame image according to the corrected first face key points.

The makeup progress determination module 1403 is configured to determine the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate among the corrected first face key points; determine, according to these four values, a crop box corresponding to the face area in the corrected initial frame image; and crop the image containing the face area from the corrected initial frame image according to the crop box.

The makeup progress determination module 1403 is further configured to enlarge the crop box by a preset multiple, and to crop the image containing the face area from the corrected initial frame image according to the enlarged crop box.

The makeup progress determination module 1403 is further configured to scale and translate the corrected first face key points according to the size of the image containing the face area and the preset size.

The device further includes a face detection module configured to detect whether the initial frame image and the current frame image each contain only the face image of the same user; if so, the operation of determining the user's current makeup progress for the specific makeup look is performed; if not, prompt information is sent to the user's terminal, the prompt information prompting the user to keep only the face of the same user appearing in the real-time makeup video.

The makeup progress detection device provided by the above embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the present application, and therefore has the same beneficial effects as that method.

Embodiment Five

1. A makeup progress detection method, comprising:

obtaining an initial frame image and a current frame image of a real-time makeup video in which a user is currently applying a specific makeup look;

obtaining a first target area image corresponding to the eyebrows from the initial frame image, and obtaining a second target area image corresponding to the eyebrows from the current frame image;

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

2. According to 1, the obtaining a first target area image corresponding to the eyebrows from the initial frame image comprises:

detecting first face key points corresponding to the initial frame image;

obtaining a face area image corresponding to the initial frame image according to the first face key points;

obtaining the first target area image corresponding to the eyebrows from the face area image according to the eyebrow key points included in the first face key points.

3. According to 2, the cropping the first target area image corresponding to the eyebrows from the face area image according to the eyebrow key points included in the first face key points comprises:

interpolating between the eyebrow key points from the brow head to the brow peak included in the first face key points, to obtain a plurality of interpolation points;

cropping, from the face area image, the closed area formed by connecting all eyebrow key points between the brow head and the brow peak and the plurality of interpolation points, to obtain a partial eyebrow image between the brow head and the brow peak;

cropping, from the face area image, the closed area formed by connecting all eyebrow key points between the brow peak and the brow tail, to obtain a partial eyebrow image between the brow peak and the brow tail;

splicing the partial eyebrow image between the brow head and the brow peak and the partial eyebrow image between the brow peak and the brow tail into the first target area image corresponding to the eyebrows.
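The interpolation step in clause 3 can be sketched as follows; linear interpolation between two adjacent key points is assumed here, since the clause does not fix the interpolation scheme:

```python
import numpy as np

def interpolate_points(p_start, p_end, n):
    """Insert n evenly spaced points between two eyebrow key points,
    densifying the brow-head-to-brow-peak contour before the closed
    area is cropped."""
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior parameters only
    return [(p_start[0] + t * (p_end[0] - p_start[0]),
             p_start[1] + t * (p_end[1] - p_start[1])) for t in ts]

print(interpolate_points((0.0, 0.0), (10.0, 4.0), 3))
# -> [(2.5, 1.0), (5.0, 2.0), (7.5, 3.0)]
```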

4. According to 1, the determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image comprises:

converting the first target area image and the second target area image into images containing a preset single-channel component in the HSV color space;

determining the current makeup progress corresponding to the current frame image according to the converted first target area image and second target area image.

5. According to 4, the determining the current makeup progress corresponding to the current frame image according to the converted first target area image and second target area image comprises:

calculating the absolute difference of the preset single-channel component between pixels at the same position in the converted first target area image and the converted second target area image;

counting the number of pixels whose absolute difference satisfies a preset makeup-completion condition;

calculating the ratio of the counted number of pixels to the total number of pixels in all target makeup areas in the first target area image, to obtain the current makeup progress corresponding to the current frame image.

6. According to 1, before the determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further comprises:

binarizing the first target area image and the second target area image respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image;

performing an AND operation on the first binarized mask image and the second binarized mask image, to obtain a second mask image corresponding to the intersection of the first target area image and the second target area image;

obtaining the face area image corresponding to the initial frame image and the face area image corresponding to the current frame image;

performing an AND operation on the second mask image and the face area image corresponding to the initial frame image, to obtain a new first target area image corresponding to the initial frame image;

performing an AND operation on the second mask image and the face area image corresponding to the current frame image, to obtain a new second target area image corresponding to the current frame image.

7. According to 1, before the determining the current makeup progress corresponding to the current frame image, the method further comprises:

performing boundary erosion on the makeup areas in the first target area image and the second target area image respectively.
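Boundary erosion as in clause 7 can be sketched with a plain 3x3 binary erosion; OpenCV's `cv2.erode` provides the same operation, and the 3x3 structuring element is an assumption:

```python
import numpy as np

def erode_binary(mask, iterations=1):
    """3x3 binary erosion: a pixel survives only if its entire 3x3
    neighbourhood is set, which peels one pixel off every region border."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(iterations):
        padded = np.pad(m, 1, constant_values=False)
        out = np.ones((h, w), bool)
        for dy in range(3):
            for dx in range(3):
                out &= padded[dy:dy + h, dx:dx + w]
        m = out
    return m.astype(np.uint8) * 255

mask = np.zeros((5, 5), np.uint8)
mask[1:4, 1:4] = 255               # 3x3 makeup area
eroded = erode_binary(mask)
print(eroded[2, 2], eroded[1, 1])  # only the centre pixel survives: 255 0
```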

8. According to 2, the obtaining a face area image corresponding to the initial frame image according to the first face key points comprises:

performing rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image;

cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points;

scaling the image containing the face area to a preset size, to obtain the face area image corresponding to the initial frame image.

9. According to 8, the performing rotation correction on the initial frame image and the first face key points according to the first face key points comprises:

determining left-eye center coordinates and right-eye center coordinates according to the left-eye key points and the right-eye key points included in the first face key points;

determining a rotation angle and rotation center point coordinates corresponding to the initial frame image according to the left-eye center coordinates and the right-eye center coordinates;

performing rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.

10. According to 8, the cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points comprises:

cropping the face area contained in the corrected initial frame image according to the corrected first face key points.

11. According to 10, the cropping the face area contained in the corrected initial frame image according to the corrected first face key points comprises:

determining the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate among the corrected first face key points;

determining, according to the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate, a crop box corresponding to the face area in the corrected initial frame image;

cropping the image containing the face area from the corrected initial frame image according to the crop box.

12. According to 11, the method further comprises:

enlarging the crop box by a preset multiple;

cropping the image containing the face area from the corrected initial frame image according to the enlarged crop box.

13. According to 8, the method further comprises:

scaling and translating the corrected first face key points according to the size of the image containing the face area and the preset size.

14. According to any one of 1-13, the method further comprises:

detecting whether the initial frame image and the current frame image each contain only the face image of the same user;

if so, performing the operation of determining the user's current makeup progress for the specific makeup look;

if not, sending prompt information to the user's terminal, the prompt information prompting the user to keep only the face of the same user appearing in the real-time makeup video.

15. A makeup progress detection device, comprising:

a video obtaining module, configured to obtain an initial frame image and a current frame image of a real-time makeup video in which a user is currently applying a specific makeup look;

a target area obtaining module, configured to obtain a first target area image corresponding to the eyebrows from the initial frame image, and obtain a second target area image corresponding to the eyebrows from the current frame image;

a progress determination module, configured to determine the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.

16. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor runs the computer program to implement the method according to any one of 1-14.

17. A computer-readable storage medium on which a computer program is stored, wherein the program is executed by a processor to implement the method according to any one of 1-14.

An embodiment of the present application provides a makeup progress detection method, which is used to detect the makeup progress corresponding to eyebrow makeup. Referring to FIG. 15, this embodiment specifically includes the following steps:

Step 601: Obtain an initial frame image and a current frame image of a real-time makeup video in which the user is currently applying a specific makeup look.

The execution subject of this embodiment of the application is a server. A client adapted to the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal. The client's display interface provides a video upload interface; when a click on the video upload interface is detected, the terminal's camera is invoked to shoot a makeup video of the user, during which the user applies eyebrow makeup to his or her own face. The user's terminal transmits the captured makeup video to the server as a video stream, and the server receives each frame of the makeup video transmitted by the user's terminal.

In this embodiment of the application, the server takes the first received frame as the initial frame image, and uses it as the reference against which the current makeup progress of the specific makeup look is compared for each subsequently received frame. Since each subsequent frame is processed in the same way, this embodiment takes the current frame image received at the current moment as an example to describe the makeup progress detection process.

In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server also detects whether the initial frame image and the current frame image each contain only the face image of the same user. It first detects whether each of the two images contains exactly one face image. If the initial frame image and/or the current frame image contains multiple face images, or contains no face image, prompt information is sent to the user's terminal. The user's terminal receives and displays the prompt information, prompting the user to keep only the face of the same user appearing in the makeup video. For example, the prompt information may be "Please keep only the face of the same person in the frame."

If it is detected that both the initial frame image and the current frame image contain exactly one face image, it is further determined whether the face image in the initial frame image and the face image in the current frame image belong to the same user. Specifically, face recognition technology may be used to extract the face feature information corresponding to the face image in each of the two frames, and the similarity between the two sets of extracted face feature information is calculated. If the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user. If the calculated similarity is less than the set value, it is determined that the faces in the two frames belong to different users, and prompt information is sent to the user's terminal. The user's terminal receives and displays the prompt information, prompting the user to keep only the face of the same user appearing in the makeup video.
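A sketch of the similarity comparison described above, assuming the face feature information is a numeric feature vector produced by some face-recognition model, and using cosine similarity with an illustrative set value of 0.8 (the embodiment fixes neither the similarity measure nor the set value):

```python
import numpy as np

def same_user(feat_a, feat_b, set_value=0.8):
    """Compare two face feature vectors; the faces are treated as the
    same user when their cosine similarity reaches the set value."""
    a = np.asarray(feat_a, float)
    b = np.asarray(feat_b, float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= set_value

print(same_user([1.0, 0.2, 0.5], [1.0, 0.2, 0.5]))  # identical features -> True
print(same_user([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal features -> False
```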

After obtaining the initial frame image and the current frame image of the user's makeup through this step, the server determines the user's current makeup progress through the following steps 602 and 603.

Step 602: Obtain a first target area image corresponding to the eyebrows from the initial frame image, and a second target area image corresponding to the eyebrows from the current frame image.

The acquisition process of the first target area image is the same as that of the second target area image, so this embodiment takes the acquisition of the first target area image as an example for detailed description. The server obtains the first target area image from the initial frame image through the following steps S5-S7.

S5: Detect the first face key points corresponding to the initial frame image.

The server is configured with a pre-trained detection model for detecting face key points, and provides a face key point detection interface service through this model. After obtaining the initial frame image of the user's makeup video, the server calls the face key point detection interface service and identifies all face key points of the user's face in the initial frame image through the detection model. To distinguish them from the face key points corresponding to the current frame image, this embodiment refers to all face key points corresponding to the initial frame image as the first face key points, and to all face key points corresponding to the current frame image as the second face key points.

The identified face key points include key points on the contour of the user's face and key points of parts such as the mouth, nose, eyes, and eyebrows. The number of identified face key points may be 106.

S6: Obtain the face area image corresponding to the initial frame image according to the first face key points.

The server obtains the face area image corresponding to the initial frame image through the following steps S60-S62:

S60: Perform rotation correction on the initial frame image and the first face key points according to the first face key points.

When the user shoots a makeup video through the terminal, the pose angle of the face cannot be guaranteed to be the same in every frame. To improve the accuracy of the comparison between the current frame image and the initial frame image, the face in each frame needs to be rotation-corrected so that, after correction, the line connecting the eyes of the face in every frame lies on the same horizontal line. This ensures that the pose angle of the face is the same in every frame, and avoids large makeup progress detection errors caused by differing pose angles.

Specifically, the left-eye center coordinates and the right-eye center coordinates are determined according to the left-eye key points and the right-eye key points included in the first face key points. All left-eye key points of the left-eye area and all right-eye key points of the right-eye area are determined from the first face key points. The abscissas of all the determined left-eye key points are averaged, and the ordinates of all the left-eye key points are averaged; the mean abscissa and the mean ordinate corresponding to the left eye form a coordinate, which is determined as the left-eye center coordinates. The right-eye center coordinates are determined in the same way.

Then, the rotation angle and the rotation center point coordinates corresponding to the initial frame image are determined according to the left-eye center coordinates and the right-eye center coordinates. As shown in FIG. 4, the horizontal difference dx and the vertical difference dy between the two coordinates are calculated from the left-eye center coordinates and the right-eye center coordinates, as well as the length d of the line connecting the two eye centers. From the inter-ocular line length d, the horizontal difference dx, and the vertical difference dy, the angle θ between the inter-ocular line and the horizontal direction is calculated; this angle θ is the rotation angle corresponding to the initial frame image. The coordinates of the midpoint of the line connecting the two eyes are then calculated from the left-eye center coordinates and the right-eye center coordinates; this midpoint is the rotation center point corresponding to the initial frame image.
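A sketch of the computation of the eye centers, the rotation angle θ, and the rotation center described above; key points are assumed to be given as (x, y) pairs, and θ is computed here with `atan2(dy, dx)`, which is equivalent to deriving it from d, dx, and dy:

```python
import math
import numpy as np

def rotation_params(left_eye_pts, right_eye_pts):
    """Each eye center is the mean of that eye's key points; theta is the
    angle between the inter-ocular line and the horizontal; the rotation
    center is the midpoint of that line."""
    lc = np.mean(left_eye_pts, axis=0)
    rc = np.mean(right_eye_pts, axis=0)
    dx, dy = rc[0] - lc[0], rc[1] - lc[1]
    theta = math.degrees(math.atan2(dy, dx))
    center = ((lc[0] + rc[0]) / 2.0, (lc[1] + rc[1]) / 2.0)
    return theta, center

theta, center = rotation_params([(10, 10), (12, 10)], [(20, 19), (22, 19)])
print(round(theta, 2), center)  # roughly 41.99 (16.0, 14.5)
```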

The initial frame image and the first face key points are rotation-corrected according to the calculated rotation angle and rotation center point coordinates. Specifically, the rotation angle and the rotation center point coordinates are input into a preset function for computing the rotation matrix of an image; this preset function may be the OpenCV function cv2.getRotationMatrix2D(). The rotation matrix corresponding to the initial frame image is obtained by calling this preset function. The product of the initial frame image and the rotation matrix is then computed to obtain the corrected initial frame image. The operation of correcting the initial frame image with the rotation matrix may also be completed by calling the OpenCV function cv2.warpAffine().
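For illustration, the 2x3 affine matrix that cv2.getRotationMatrix2D(center, angle, scale) returns can be reconstructed directly from its documented formula and applied to a homogeneous point; cv2.warpAffine applies the same matrix to every pixel of the frame:

```python
import math
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Same 2x3 matrix as cv2.getRotationMatrix2D: rotation about `center`
    by `angle_deg` (counter-clockwise for positive angles) with scaling."""
    alpha = scale * math.cos(math.radians(angle_deg))
    beta = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return np.array([[alpha,  beta, (1 - alpha) * cx - beta * cy],
                     [-beta, alpha, beta * cx + (1 - alpha) * cy]])

M = rotation_matrix_2d((0.0, 0.0), 90.0)
pt = M @ np.array([1.0, 0.0, 1.0])  # rotate the point (1, 0) about the origin
print(np.round(pt, 6))
```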

For the first face key points, each key point needs to be corrected one by one to correspond to the corrected initial frame image. When correcting the first face key points one by one, two coordinate system conversions are required: first, the coordinate system with its origin at the upper-left corner of the initial frame image is converted to a coordinate system with its origin at the lower-left corner; second, the coordinate system with its origin at the lower-left corner is further converted to a coordinate system with the above rotation center point as its origin, as shown in FIG. 5. After the two coordinate system conversions, each first face key point is transformed according to the following formula (1) to complete the rotation correction of the first face key points.

x = x0·cosθ − y0·sinθ
y = x0·sinθ + y0·cosθ        (1)

In formula (1), x0 and y0 are respectively the abscissa and ordinate of a first face key point before rotation correction, x and y are respectively the abscissa and ordinate of the first face key point after rotation correction, and θ is the above rotation angle.
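The two coordinate-system conversions and formula (1) can be combined into one small helper. This is a sketch only: the intermediate y-up frame follows the description above, and the counterclockwise sign convention for θ is an assumption, since the formula image itself specifies it:

```python
import math

def rotate_keypoint(x0, y0, cx, cy, theta_deg, img_h):
    # 1st conversion: top-left origin -> bottom-left origin (y axis up)
    x, y = x0, img_h - y0
    ry = img_h - cy
    # 2nd conversion: bottom-left origin -> rotation-centre origin
    x, y = x - cx, y - ry
    # formula (1): planar rotation by theta
    t = math.radians(theta_deg)
    xr = x * math.cos(t) - y * math.sin(t)
    yr = x * math.sin(t) + y * math.cos(t)
    # undo both shifts to return to image (top-left origin) coordinates
    return xr + cx, img_h - (yr + ry)
```

A quick sanity check of the construction: the rotation center itself is unchanged for any angle, and θ = 0 leaves every key point where it was.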

The corrected initial frame image and first face key points are based on the entire image, which contains not only the user's face information but also other redundant image information; therefore, the face region needs to be cropped from the corrected image through the following step S61.

S61: According to the corrected first face key points, crop an image containing the face region from the corrected initial frame image.

First, the minimum abscissa value, minimum ordinate value, maximum abscissa value, and maximum ordinate value are determined from the corrected first face key points. Then, according to these four values, the crop box corresponding to the face region in the corrected initial frame image is determined. Specifically, the minimum abscissa value and the minimum ordinate value form one coordinate point, which serves as the upper-left vertex of the crop box corresponding to the face region; the maximum abscissa value and the maximum ordinate value form another coordinate point, which serves as the lower-right vertex of the crop box. The position of the crop box in the corrected initial frame image is determined from these two vertices, and the image inside the crop box, i.e., the image containing the face region, is cropped out of the corrected initial frame image.

In other embodiments of the present application, to ensure that the user's entire face region is captured and to avoid incomplete cropping causing large errors in subsequent makeup progress detection, the above crop box may also be enlarged by a preset multiple, which may be 1.15, 1.25, or the like. The embodiments of the present application do not limit the specific value of the preset multiple; in practical applications it can be set as required. After the crop box is enlarged outward by the preset multiple, the image inside the enlarged crop box is cropped from the corrected initial frame image, thereby capturing an image containing the user's complete face region.
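Step S61 and the optional enlargement can be sketched as below. The choice to grow the box about its own center is an assumption; the text only says the box is enlarged outward by a preset multiple:

```python
import numpy as np

def face_crop_box(keypoints, scale=1.0):
    # Corners from the extreme keypoint coordinates (step S61); an
    # optional scale > 1 grows the box about its centre so the whole
    # face region is kept (e.g. scale = 1.15 or 1.25)
    pts = np.asarray(keypoints, dtype=float)
    x1, y1 = pts[:, 0].min(), pts[:, 1].min()
    x2, y2 = pts[:, 0].max(), pts[:, 1].max()
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

The returned box would still be clamped to the image bounds before slicing the pixel array.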

S62: Scale the image containing the face region to a preset size to obtain the face region image corresponding to the initial frame image.

After the image containing the user's face region is cropped from the initial frame image in the above manner, it is scaled to a preset size to obtain the face region image corresponding to the initial frame image. The preset size may be 390×390, 400×400, or the like. The embodiments of the present application do not limit the specific value of the preset size, which can be set as required in practical applications.

To make the first face key points fit the scaled face region image, after the cropped image containing the face region is scaled to the preset size, the corrected first face key points must also be scaled and translated according to the size of the image containing the face region before scaling and the preset size. Specifically, the translation direction and translation distance of each first face key point are determined from the size of the image before scaling and the preset size to which the image is scaled; each first face key point is then translated according to its translation direction and distance, and the coordinates of each translated first face key point are recorded.
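For a uniform resize, the per-point adjustment above reduces to scaling each coordinate by the ratio of the destination size to the source size; a minimal sketch (the function name is our own):

```python
def rescale_keypoints(keypoints, src_size, dst_size):
    # Map keypoints from the cropped image's size (src_w, src_h) onto
    # the preset size (dst_w, dst_h) so they stay on the same features
    src_w, src_h = src_size
    dst_w, dst_h = dst_size
    return [(x * dst_w / src_w, y * dst_h / src_h) for x, y in keypoints]
```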

The face region image is obtained from the initial frame image in the above manner, and the first face key points are fitted to it through rotation correction, translation, and scaling; afterwards, the first target region image corresponding to the eyebrows is extracted from the face region image through the following step S7.

In other embodiments of the present application, before step S7 is performed, Gaussian filtering may first be applied to the face region image to remove noise from it. Specifically, Gaussian filtering is performed on the face region image corresponding to the initial frame image with a Gaussian kernel of a preset size.

The Gaussian kernel is the key parameter of Gaussian filtering. If the kernel is too small, a good filtering effect cannot be achieved; if it is too large, the noise in the image is filtered out, but useful information in the image is smoothed away at the same time. The embodiments of the present application select a Gaussian kernel of a preset size, which may be 9×9. In addition, the other parameters of the Gaussian filter function, sigmaX and sigmaY, are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequently obtained makeup progress.
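In OpenCV this step is a single call, cv2.GaussianBlur(face_img, (9, 9), 0); when sigma is passed as 0, OpenCV derives it from the kernel size. The sketch below reproduces that documented derivation and the resulting normalized 1-D kernel, purely to make the parameter choice concrete:

```python
import numpy as np

def gaussian_kernel_1d(ksize, sigma=0.0):
    # When sigma <= 0, derive it from the kernel size the way OpenCV
    # documents for getGaussianKernel: 0.3*((ksize-1)*0.5 - 1) + 0.8
    if sigma <= 0:
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    r = np.arange(ksize, dtype=float) - (ksize - 1) / 2.0
    k = np.exp(-(r ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()   # normalised so filtering preserves brightness
```

The 2-D 9×9 kernel applied by GaussianBlur is the outer product of this 1-D kernel with itself.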

After the face region image is obtained in the above manner, or after Gaussian filtering is applied to it, the first target region image corresponding to the eyebrows is extracted from the face region image corresponding to the initial frame image through step S7.

S7: According to the eyebrow key points included in the first face key points, extract the first target region image corresponding to the eyebrows from the face region image corresponding to the initial frame image.

When detecting the progress of eyebrow makeup, the image of the region where the eyebrows are located needs to be cut out, to avoid the influence of other regions on the detection. Moreover, once the eyebrow region is cut out, subsequent computation is performed only on that region, which reduces the amount of computation while improving accuracy.

The first face key points obtained above include multiple eyebrow key points, for example 18. These eyebrow key points are distributed at different positions along the eyebrow contour from the brow head to the brow tail. To improve the accuracy of cutting out the first target region image corresponding to the eyebrows, the embodiments of the present application obtain more points on the eyebrow contour by linear interpolation, so that the image is cut out based on more points. Since the brow tail is pointed, it is not well suited to linear interpolation. The embodiments of the present application therefore divide the process of extracting the first target region image corresponding to the eyebrows into two parts: one part covers the segment from the brow head to the brow peak, where more points are obtained by linear interpolation before the image is cut out; the other part covers the segment from the brow peak to the brow tail, where the image is cut out using the currently obtained eyebrow key points of that segment.

Specifically, linear interpolation is performed on the eyebrow key points, included in the first face key points, between the brow head and the brow peak to obtain multiple interpolation points. All eyebrow key points between the brow head and the brow peak in the face region image corresponding to the initial frame image, together with the obtained interpolation points, are connected sequentially along the eyebrow contour line to obtain a closed region, which encloses the partial eyebrow region from the brow head to the brow peak. The image of this closed region is cut out of the face region image corresponding to the initial frame image to obtain the partial eyebrow image between the brow head and the brow peak.

All eyebrow key points between the brow peak and the brow tail in the face region image corresponding to the initial frame image are connected sequentially along the eyebrow contour line to obtain a closed region, which encloses the partial eyebrow region from the brow peak to the brow tail. The image of this closed region is cut out of the face region image corresponding to the initial frame image to obtain the partial eyebrow image between the brow peak and the brow tail.

The partial eyebrow image between the brow head and the brow peak and the partial eyebrow image between the brow peak and the brow tail are stitched into the first target region image corresponding to the eyebrows.
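The brow-head-to-brow-peak densification step can be sketched as follows; the number of inserted points per segment (n_insert) is a hypothetical parameter, as the text does not fix it, and the resulting point list would then be used as a polygon to mask out the closed region:

```python
import numpy as np

def densify_segment(points, n_insert=3):
    # Insert n_insert evenly spaced interpolation points between each
    # pair of consecutive brow-head-to-brow-peak keypoints, keeping the
    # original keypoints in order along the contour
    out = []
    for (x1, y1), (x2, y2) in zip(points[:-1], points[1:]):
        out.append((float(x1), float(y1)))
        for t in np.linspace(0.0, 1.0, n_insert + 2)[1:-1]:
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    x_last, y_last = points[-1]
    out.append((float(x_last), float(y_last)))
    return out
```

The brow-peak-to-brow-tail segment skips this step and uses its keypoints directly, matching the two-part split described above.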

For the current frame image, the second target region image corresponding to the eyebrows is likewise obtained from the current frame image according to the operations of steps S5–S7 above.

In other embodiments of the present application, considering that in a real makeup scene the edge of eyebrow makeup may not have a clear contour line (the boundary is usually blurred so that it does not appear abrupt), after the first target region image and the second target region image are obtained through the above embodiments, boundary erosion is additionally performed on the eyebrow region in each of them, so that the boundary of the target makeup region of the eyebrows is blurred, which in turn improves the accuracy of makeup progress detection.
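Boundary erosion is typically done with cv2.erode; as a self-contained illustration of what one erosion pass does, the sketch below implements 3×3 binary erosion in NumPy (the structuring-element size is an assumption, not specified in the text):

```python
import numpy as np

def erode_binary(mask, iterations=1):
    # 3x3 binary erosion: a pixel survives only if its entire 3x3
    # neighbourhood lies inside the region, so each pass peels one
    # pixel off the eyebrow boundary
    m = np.asarray(mask, dtype=bool)
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        m = (p[:-2, :-2] & p[:-2, 1:-1] & p[:-2, 2:] &
             p[1:-1, :-2] & p[1:-1, 1:-1] & p[1:-1, 2:] &
             p[2:, :-2] & p[2:, 1:-1] & p[2:, 2:])
    return m
```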

Step 603: Determine the current makeup progress corresponding to the current frame image according to the first target region image and the second target region image.

The color space of the first target region image corresponding to the eyebrows in the initial frame image and of the second target region image corresponding to the eyebrows in the current frame image obtained in the above manner is the RGB color space. The embodiments of the present application determined in advance, through a large number of experiments, the influence of eyebrow makeup on each channel component of the color space, and found that the influence on the individual color channels of the RGB color space differs little. The HSV color space, in contrast, is composed of three components, Hue, Saturation, and Value (brightness); when one component changes, the other two do not change significantly, so compared with the RGB color space, the HSV color space allows one channel component to be separated out. Experiments also determined which of the Value, Hue, and Saturation components eyebrow makeup influences most, and the most influenced channel component is configured in the server as the preset single-channel component corresponding to the preset type of makeup. For eyebrow makeup, the corresponding preset single-channel component may be the Value (brightness) component.

After the first target region image corresponding to the eyebrows in the initial frame image and the second target region image corresponding to the eyebrows in the current frame image are obtained in the above manner, both images are converted from the RGB color space to the HSV color space. The preset single-channel component is separated from the HSV color space of the converted first target region image to obtain a first target region image containing only the preset single-channel component; likewise, the preset single-channel component is separated from the HSV color space of the converted second target region image to obtain a second target region image containing only the preset single-channel component.
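With OpenCV this would be cv2.cvtColor(img, cv2.COLOR_BGR2HSV) followed by cv2.split(hsv)[2]. As a library-free sketch, the Value component can be taken directly from its HSV definition, V = max(R, G, B):

```python
import numpy as np

def value_channel(rgb):
    # HSV Value (brightness) of each pixel is max(R, G, B); taking it
    # directly avoids converting the full image when only this single
    # channel component is needed
    return np.asarray(rgb).max(axis=-1)
```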

Then, according to the converted first target region image and second target region image, the current makeup progress corresponding to the current frame image is determined.

Specifically, the absolute difference of the above channel component is computed for each pair of identically positioned pixels in the converted first and second target region images. For example, the absolute difference of the Value component between pixels with the same coordinates in the converted first and second target region images is calculated. The number of pixels whose absolute difference satisfies the preset makeup-completion condition is then counted. The preset makeup-completion condition is that the absolute difference corresponding to a pixel is greater than a first preset threshold, which may be 7, 8, or the like.

The total number of pixels in the eyebrow region of the first target region image or the second target region image is counted. The ratio between the number of pixels satisfying the preset makeup-completion condition and the total number of pixels in the eyebrow region is then computed, and this ratio is determined as the current makeup progress.
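The progress computation above can be sketched in a few lines; the function name and the boolean brow_mask argument (marking eyebrow-region pixels) are our own framing of the described inputs:

```python
import numpy as np

def makeup_progress(v_init, v_cur, brow_mask, threshold=7):
    # Per-pixel absolute change of the Value component between the
    # initial and current frames; a pixel counts toward progress when
    # its change exceeds the first preset threshold
    diff = np.abs(v_cur.astype(np.int32) - v_init.astype(np.int32))
    changed = (diff > threshold) & brow_mask.astype(bool)
    # ratio of changed pixels to all eyebrow-region pixels
    return changed.sum() / brow_mask.astype(bool).sum()
```

Casting to int32 before subtracting avoids uint8 wraparound on pixels that became darker.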

In other embodiments of the present application, to further improve the accuracy of makeup progress detection, the eyebrow regions in the first target region image and the second target region image are further aligned. Specifically, binarization is performed on the first and second target region images containing only the preset single-channel component: the value of the preset single-channel component of the pixels in the target makeup region of each image is set to 1, and that of the pixels at all other positions is set to 0. The binarization yields a first binarized mask image corresponding to the first target region image and a second binarized mask image corresponding to the second target region image.

An AND operation is then performed on the first and second binarized mask images, i.e., the pixels at the same position in the two mask images are ANDed, to obtain a second mask image corresponding to the intersection of the first and second target region images. The region of the second mask image whose pixels have a non-zero preset single-channel component is the target makeup region shared by the first and second target region images.
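A minimal sketch of the binarize-then-AND step; treating any non-zero channel value as "inside the target makeup region" is our assumption about the binarization rule:

```python
import numpy as np

def intersection_mask(region1, region2):
    # Binarise each single-channel region image (non-zero -> 1) and AND
    # them; the result keeps only pixels that belong to the target
    # makeup area in both the initial and the current frame
    m1 = (np.asarray(region1) > 0).astype(np.uint8)
    m2 = (np.asarray(region2) > 0).astype(np.uint8)
    return m1 & m2
```

ANDing this second mask image with each face region image (or each boundary-eroded target region image) then yields the new, pixel-aligned target region images.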

The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operations of the preceding steps. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the eyebrows in the initial frame image; an AND operation is performed on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the eyebrows in the current frame image.

In other embodiments of the present application, an AND operation may also be performed on the second mask image and the boundary-eroded first target region image corresponding to the eyebrows to obtain the new first target region image, and on the second mask image and the boundary-eroded second target region image corresponding to the eyebrows to obtain the new second target region image.

Since the second mask image contains the target makeup region shared by the initial frame image and the current frame image, cutting out the new first and second target region images through the second mask image in the above manner makes the positions of the target makeup region in the two new images exactly consistent. When the makeup progress is subsequently determined by comparing the changes between the target makeup region in the current frame image and that in the initial frame image, the compared regions are therefore guaranteed to be exactly consistent, which greatly improves the accuracy of makeup progress detection.

After the target makeup regions in the initial frame image and the current frame image are aligned in the above manner to obtain the new first and second target region images, the current makeup progress corresponding to the current frame image is determined again through the operation of step 603 above.

After the current makeup progress is determined in any of the above manners, the server sends it to the user's terminal, which displays it upon receipt. The current makeup progress may be a ratio or a percentage, and the terminal may display it in the form of a progress bar.

While the user applies makeup, the makeup progress detection method provided by the embodiments of the present application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected progress to the user, so that the user can see his or her makeup progress intuitively, improving makeup efficiency.

To facilitate understanding of the method provided by the embodiments of the present application, a description is given below with reference to the accompanying drawings. As shown in FIG. 16, according to the initial frame image with its corresponding first face key points and the current frame image with its corresponding second face key points, the faces in the initial frame image and the current frame image are respectively aligned and cropped, and the two cropped face region images are smoothed and denoised by the Laplacian algorithm. The first and second target region images corresponding to the eyebrows are then cut out of the two face region images, and boundary erosion is applied to both. The two target region images are next converted into HSV-color-space images containing only the preset single-channel component, aligned once more, and the current makeup progress is then computed from the two.

In the embodiments of the present application, the user's face region in a video frame is corrected and cropped using the face key points, which improves the accuracy of identifying the face region. The target region image corresponding to the eyebrows is cut out of the face region image based on the face key points, and the target region images corresponding to the initial frame image and the current frame image are pixel-aligned, which improves the accuracy of the eyebrow target region images. Aligning the target makeup regions in the initial and current frame images reduces the error introduced by differences in the position of the target makeup region. A segmented interpolation algorithm is introduced when cutting out the eyebrow region, making the extracted region more coherent and accurate. Moreover, the present application does not adopt deep learning and does not need to collect a large amount of data in advance; it captures real-time images of the user applying makeup and, after server-side computation, returns the detection result to the user. Compared with a deep-learning model-inference scheme, the present application spends less computing cost in the algorithm-processing stage, reducing the processing pressure on the server.

An embodiment of the present application further provides a makeup progress detection apparatus for executing the above makeup progress detection method for detecting eyebrow makeup progress. Referring to FIG. 17, the apparatus specifically includes:

a video acquisition module 1601, configured to acquire the initial frame image and the current frame image of a real-time makeup video in which the user is currently applying a specific makeup look;

a target region acquisition module 1602, configured to acquire the first target region image corresponding to the eyebrows from the initial frame image and the second target region image corresponding to the eyebrows from the current frame image; and

a progress determination module 1603, configured to determine the current makeup progress corresponding to the current frame image according to the first target region image and the second target region image.

The target region acquisition module 1602 is configured to detect the first face key points corresponding to the initial frame image; obtain the face region image corresponding to the initial frame image according to the first face key points; and obtain the first target region image corresponding to the eyebrows from the face region image according to the eyebrow key points included in the first face key points.

The target region acquisition module 1602 is configured to interpolate the eyebrow key points, included in the first face key points, between the brow head and the brow peak to obtain multiple interpolation points; cut out of the face region image the closed region formed by connecting all eyebrow key points between the brow head and the brow peak together with the interpolation points, obtaining the partial eyebrow image between the brow head and the brow peak; cut out of the face region image the closed region formed by connecting all eyebrow key points between the brow peak and the brow tail, obtaining the partial eyebrow image between the brow peak and the brow tail; and stitch the two partial eyebrow images into the first target region image corresponding to the eyebrows.

The progress determination module 1603 is configured to respectively convert the first and second target region images into HSV-color-space images containing the preset single-channel component, and to determine the current makeup progress corresponding to the current frame image according to the converted first and second target region images.

The progress determination module 1603 is configured to compute the absolute differences of the preset single-channel component for identically positioned pixels in the converted first and second target region images; count the number of pixels whose absolute difference satisfies the preset makeup-completion condition; and compute the ratio between the counted number of pixels and the total number of pixels in all target makeup regions of the first target region image to obtain the current makeup progress corresponding to the current frame image.

The progress determination module 1603 is further configured to binarize the first and second target region images to obtain the first binarized mask image corresponding to the first target region image and the second binarized mask image corresponding to the second target region image; AND the two binarized mask images to obtain the second mask image corresponding to the intersection of the first and second target region images; obtain the face region images corresponding to the initial frame image and the current frame image; AND the second mask image with the face region image corresponding to the initial frame image to obtain the new first target region image corresponding to the initial frame image; and AND the second mask image with the face region image corresponding to the current frame image to obtain the new second target region image corresponding to the current frame image.

The apparatus further includes a boundary erosion module, configured to respectively perform boundary erosion on the makeup regions in the first target region image and the second target region image.

The target region acquisition module 1602 is configured to perform rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image; crop an image containing the face region from the corrected initial frame image according to the corrected first face key points; and scale the image containing the face region to the preset size to obtain the face region image corresponding to the initial frame image.

目标区域获取模块1602,用于根据第一人脸关键点包括的左眼关键点和右眼关键点,分别确定左眼中心坐标和右眼中心坐标;根据左眼中心坐标和右眼中心坐标,确定初始帧图像对应的旋转角度及旋转中心点坐标;根据旋转角度和旋转中心点坐标,对初始帧图像及第一人脸关键点进行旋转矫正。The target area acquisition module 1602 is configured to determine the left eye center coordinates and the right eye center coordinates respectively according to the left eye key points and the right eye key points included in the first face key points; determine the rotation angle and the rotation center point coordinates corresponding to the initial frame image according to the left eye center coordinates and the right eye center coordinates; and perform rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.
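The eye-center-based rotation parameters described above can be sketched as follows. The helper name is hypothetical, and taking the midpoint between the two eye centers as the rotation center is an illustrative choice the embodiment does not fix; with OpenCV the returned values would typically be passed to `cv2.getRotationMatrix2D` and `cv2.warpAffine`:

```python
import numpy as np

def eye_rotation_params(left_eye_pts, right_eye_pts):
    # Each eye center is the mean of that eye's key points.
    left_c = np.mean(np.asarray(left_eye_pts, dtype=np.float64), axis=0)
    right_c = np.mean(np.asarray(right_eye_pts, dtype=np.float64), axis=0)
    # Rotation angle: inclination of the line joining the eye centers with
    # the horizontal; rotating by this angle levels the eyes.
    angle = np.degrees(np.arctan2(right_c[1] - left_c[1], right_c[0] - left_c[0]))
    # Rotation center: midpoint between the eye centers (illustrative assumption).
    center = (float((left_c[0] + right_c[0]) / 2), float((left_c[1] + right_c[1]) / 2))
    return float(angle), center
```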

目标区域获取模块1602,用于根据矫正后的第一人脸关键点,对矫正后的初始帧图像中包含的人脸区域进行图像截取。The target area acquisition module 1602 is configured to crop the face area contained in the corrected initial frame image according to the corrected first face key points.

目标区域获取模块1602,用于从矫正后的第一人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值;根据最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值,确定矫正后的初始帧图像中人脸区域对应的截取框;根据截取框,从矫正后的初始帧图像中截取出包含人脸区域的图像。The target area acquisition module 1602 is configured to determine the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value from the corrected first face key points; determine the cropping box corresponding to the face area in the corrected initial frame image according to these four values; and crop the image containing the face area from the corrected initial frame image according to the cropping box.

目标区域获取模块1602,还用于将截取框放大预设倍数;根据放大后的截取框,从矫正后的初始帧图像中截取出包含人脸区域的图像。The target area acquisition module 1602 is further configured to enlarge the cropping box by a preset multiple, and crop the image containing the face area from the corrected initial frame image according to the enlarged cropping box.
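The cropping-box construction from the key-point extremes, including the enlargement by a preset multiple, can be sketched as follows. The default factor of 1.2 and the function name are illustrative assumptions, and clamping to the image bounds is omitted for brevity:

```python
def crop_box_from_keypoints(points, scale=1.2):
    # points: corrected face key points as (x, y) pairs.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Enlarge the box around its center by the preset multiple so the whole
    # face (forehead, chin, ears) is retained in the crop.
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) * scale / 2
    half_h = (y_max - y_min) * scale / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```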

目标区域获取模块1602,还用于根据包含人脸区域的图像的尺寸及预设尺寸,对矫正后的第一人脸关键点进行缩放平移处理。The target area acquisition module 1602 is further configured to perform scaling and translation processing on the corrected first key points of the human face according to the size of the image including the human face area and the preset size.

该装置还包括:人脸检测模块,用于检测初始帧图像和当前帧图像中是否均仅包含同一个用户的人脸图像;如果是,则执行确定用户进行特定妆容的当前化妆进度的操作;如果否,则发送提示信息给用户的终端,提示信息用于提示用户保持实时化妆视频中仅出现同一个用户的人脸。The device also includes: a face detection module configured to detect whether the initial frame image and the current frame image each contain only the face image of the same user; if so, perform the operation of determining the user's current makeup progress for the specific makeup look; if not, send prompt information to the user's terminal, the prompt information being used to prompt the user to keep only that user's face appearing in the real-time makeup video.

本申请的上述实施例提供的化妆进度检测装置与本申请实施例提供的化妆进度检测方法出于相同的发明构思,具有与其存储的应用程序所采用、运行或实现的方法相同的有益效果。The makeup progress detection device provided in the above embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided in the embodiments of the present application, and has the same beneficial effects as the methods adopted, run or realized by its stored application programs.

实施例六Embodiment six

本申请实施例提供一种化妆进度检测方法,该方法用于粉底、散粉等可以全脸涂抹的妆容对应的化妆进度。参见图18,该实施例具体包括以下步骤:An embodiment of the present application provides a method for detecting makeup progress, which is used for makeup progress corresponding to makeup looks that can be applied to the entire face, such as foundation and loose powder. Referring to Figure 18, this embodiment specifically includes the following steps:

步骤701:获取用户当前进行特定妆容的实时化妆视频中的初始帧图像和当前帧图像。Step 701: Obtain an initial frame image and a current frame image in a real-time makeup video in which the user is currently performing a specific makeup look.

步骤701的操作与上述实施例五中步骤601的操作相同,在此不再赘述。The operation of step 701 is the same as the operation of step 601 in the fifth embodiment above, and will not be repeated here.

步骤702:根据初始帧图像,模拟生成完成特定妆容后的结果图像。Step 702: According to the initial frame image, simulate and generate a result image after completing a specific makeup.

利用3D渲染技术在初始帧图像上渲染完成特定妆容的效果,得到结果图像。例如,通过3D渲染技术在初始帧图像上渲染出完成粉底上妆后的结果图像。Use 3D rendering technology to render the effect of specific makeup on the initial frame image to obtain the result image. For example, a resultant image after applying foundation makeup is rendered on the initial frame image by using a 3D rendering technology.

步骤703:根据初始帧图像、结果图像和当前帧图像,确定当前帧图像对应的当前化妆进度。Step 703: According to the initial frame image, the result image and the current frame image, determine the current makeup progress corresponding to the current frame image.

服务器具体通过如下步骤S8-S10的操作来确定当前化妆进度,包括:The server specifically determines the current makeup progress through the following steps S8-S10, including:

S8:分别获取初始帧图像、结果图像和当前帧图像对应的整体图像亮度。S8: Obtain the overall image brightness corresponding to the initial frame image, the result image, and the current frame image respectively.

对于初始帧图像,首先将初始帧图像转换为灰度图像,然后计算转换后初始帧图像对应的灰度图像中所有像素点的灰度平均值,将计算的灰度平均值确定为初始帧图像对应的整体图像亮度。For the initial frame image, the initial frame image is first converted into a grayscale image, then the average gray value of all pixels in the converted grayscale image is calculated, and this average gray value is determined as the overall image brightness corresponding to the initial frame image.

对于结果图像和当前帧图像,按照同样的方式计算结果图像对应的灰度图像中所有像素点的灰度平均值,得到结果图像对应的整体图像亮度;以及计算当前帧图像对应的灰度图像中所有像素点的灰度平均值,得到当前帧图像对应的整体图像亮度。For the result image and the current frame image, the average gray value of all pixels in the grayscale image corresponding to the result image is calculated in the same manner to obtain the overall image brightness corresponding to the result image, and the average gray value of all pixels in the grayscale image corresponding to the current frame image is calculated to obtain the overall image brightness corresponding to the current frame image.

其中,整体图像亮度包含了图像中人脸区域的亮度及人脸所处背景的环境亮度。Wherein, the overall image brightness includes the brightness of the face area in the image and the ambient brightness of the background where the face is located.
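The overall-image-brightness computation of step S8 can be sketched as follows. The Rec.601 luma weights shown are the weights commonly used for RGB-to-grayscale conversion (e.g. by OpenCV); the embodiment itself only specifies a grayscale conversion followed by a pixel mean:

```python
import numpy as np

def overall_brightness(image_rgb):
    # Convert to grayscale with Rec.601 luma weights (channel order R, G, B),
    # then average over every pixel. The result covers both the face area
    # and the background environment, as noted in the text above.
    channels = image_rgb.astype(np.float64)
    gray = 0.299 * channels[..., 0] + 0.587 * channels[..., 1] + 0.114 * channels[..., 2]
    return float(gray.mean())
```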

S9:分别获取初始帧图像、结果图像和当前帧图像对应的人脸区域亮度。S9: Respectively acquire the brightness of the face area corresponding to the initial frame image, the result image and the current frame image.

服务器具体通过如下步骤S91-S93的操作来获取各图像对应的人脸区域亮度,具体包括:The server specifically obtains the brightness of the face area corresponding to each image through the following steps S91-S93, specifically including:

S91:分别获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像。S91: Respectively acquire face region images corresponding to the initial frame image, the result image and the current frame image.

获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像的具体操作过程相同,均可参考上述实施例五中步骤S5-S6的过程,在此不再赘述。The specific operation process of obtaining the corresponding face area images of the initial frame image, the result image and the current frame image is the same, and can refer to the process of steps S5-S6 in the fifth embodiment above, and will not be repeated here.

S92:分别将初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像转换为人脸灰度图像。S92: Convert the face region images respectively corresponding to the initial frame image, the result image and the current frame image into face grayscale images.

S93:分别计算初始帧图像、结果图像和当前帧图像各自对应的人脸灰度图像中像素点的灰度平均值,得到初始帧图像、结果图像和当前帧图像各自对应的人脸区域亮度。S93: Calculating gray averages of pixels in the face gray images corresponding to the initial frame image, the result image, and the current frame image respectively, to obtain the brightness of the face area corresponding to the initial frame image, the result image, and the current frame image.

将初始帧图像对应的人脸区域图像转换为人脸灰度图像,计算该人脸灰度图像中所有像素点的灰度平均值,将该灰度平均值确定为初始帧图像对应的人脸区域亮度。按照同样的方式,分别计算出结果图像对应的人脸区域亮度和当前帧图像对应的人脸区域亮度。The face area image corresponding to the initial frame image is converted into a face grayscale image, the average gray value of all pixels in that grayscale image is calculated, and this average gray value is determined as the face area brightness corresponding to the initial frame image. In the same manner, the face area brightness corresponding to the result image and the face area brightness corresponding to the current frame image are calculated respectively.

上述步骤S8和S9可以并行执行,也可以串行执行,本申请实施例并不限制步骤S8和S9的执行顺序。通过上述步骤S8和S9分别获得初始帧图像对应的整体图像亮度和人脸区域亮度、结果图像对应的整体图像亮度和人脸区域亮度、当前帧图像对应的整体图像亮度和人脸区域亮度,之后通过如下步骤S10的操作来确定当前化妆进度。The above steps S8 and S9 may be executed in parallel or serially; this embodiment of the present application does not limit their execution order. Through steps S8 and S9, the overall image brightness and face area brightness corresponding to the initial frame image, to the result image, and to the current frame image are respectively obtained, after which the current makeup progress is determined through the operation of the following step S10.

S10:根据初始帧图像、结果图像和当前帧图像各自对应的整体图像亮度和人脸区域亮度,确定当前帧图像对应的当前化妆进度。S10: Determine the current makeup progress corresponding to the current frame image according to the overall image brightness and face area brightness corresponding to the initial frame image, the result image, and the current frame image.

由于用户在一定的亮度环境下进行目标类型的上妆,用户脸部的亮度叠加了上妆效果和环境光线两方面的影响,所以在确定当前化妆进度时需要消除环境光线的影响,只考虑人脸图像中因上妆效果造成的亮度变化,确保获得的当前化妆进度的准确性。Since the user applies the target type of makeup in a certain brightness environment, the brightness of the user's face superimposes the effects of both the makeup and the ambient light. Therefore, when determining the current makeup progress, the influence of the ambient light needs to be eliminated, and only the brightness change in the face image caused by the makeup effect is considered, ensuring the accuracy of the obtained current makeup progress.

对于当前帧图像与初始帧图像之间的环境变化,根据初始帧图像对应的整体图像亮度和人脸区域亮度以及当前帧图像对应的整体图像亮度和人脸区域亮度,确定当前帧图像对应的第一环境变化亮度。具体地,计算初始帧图像对应的整体图像亮度与初始帧图像对应的人脸区域亮度之间的差值,该差值即为初始帧图像中所有背景部分的亮度,称为初始帧图像的环境亮度。以及计算当前帧图像对应的整体图像亮度与当前帧图像对应的人脸区域亮度之间的差值,该差值即为当前帧图像中所有背景部分的亮度,称为当前帧图像的环境亮度。计算当前帧图像的环境亮度与初始帧图像的环境亮度之间的差值绝对值,将该差值绝对值确定为当前帧图像对应的第一环境变化亮度。第一环境变化亮度体现了当前帧图像与初始帧图像之间环境亮度的变化情况。For the environmental change between the current frame image and the initial frame image, the first environment change brightness corresponding to the current frame image is determined according to the overall image brightness and face area brightness corresponding to the initial frame image and the overall image brightness and face area brightness corresponding to the current frame image. Specifically, the difference between the overall image brightness corresponding to the initial frame image and the face area brightness corresponding to the initial frame image is calculated; this difference is the brightness of the entire background portion of the initial frame image, referred to as the ambient brightness of the initial frame image. Likewise, the difference between the overall image brightness corresponding to the current frame image and the face area brightness corresponding to the current frame image is calculated; this difference is the brightness of the entire background portion of the current frame image, referred to as the ambient brightness of the current frame image. The absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image is calculated and determined as the first environment change brightness corresponding to the current frame image. The first environment change brightness reflects how the ambient brightness changes between the current frame image and the initial frame image.

由于结果图像是在初始帧图像的基础上渲染目标类型的上妆效果而得到的,因此可能受渲染操作的影响,使得结果图像中背景部分的亮度与初始帧图像中背景部分的亮度不一致,即结果图像和初始帧图像之间也可能存在环境亮度变化。因此本申请实施例还根据初始帧图像对应的整体图像亮度和人脸区域亮度以及结果图像对应的整体图像亮度和人脸区域亮度,确定结果图像对应的第二环境变化亮度。具体地,计算结果图像对应的整体图像亮度与结果图像对应的人脸区域亮度之间的差值,该差值即为结果图像中所有背景部分的亮度,称为结果图像的环境亮度。计算结果图像的环境亮度与上述初始帧图像的环境亮度之间的差值绝对值,将该差值绝对值确定为结果图像对应的第二环境变化亮度。第二环境变化亮度体现了结果图像与初始帧图像之间环境亮度的变化情况。Since the result image is obtained by rendering the target type of makeup effect on the basis of the initial frame image, the rendering operation may cause the brightness of the background portion of the result image to differ from that of the initial frame image; that is, there may also be an ambient brightness change between the result image and the initial frame image. Therefore, the embodiment of the present application further determines the second environment change brightness corresponding to the result image according to the overall image brightness and face area brightness corresponding to the initial frame image and the overall image brightness and face area brightness corresponding to the result image. Specifically, the difference between the overall image brightness corresponding to the result image and the face area brightness corresponding to the result image is calculated; this difference is the brightness of the entire background portion of the result image, referred to as the ambient brightness of the result image. The absolute value of the difference between the ambient brightness of the result image and the ambient brightness of the initial frame image is calculated and determined as the second environment change brightness corresponding to the result image. The second environment change brightness reflects how the ambient brightness changes between the result image and the initial frame image.
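Both environment change brightness values follow the same subtraction pattern, which can be sketched as follows. The function name is hypothetical; calling it with the current frame as the second image yields the first environment change brightness, and with the simulated result image yields the second:

```python
def env_change_brightness(ref_overall, ref_face, other_overall, other_face):
    # Ambient (background) brightness = overall image brightness - face area brightness.
    ref_env = ref_overall - ref_face
    other_env = other_overall - other_face
    # Environment change brightness: absolute difference of the two ambient values.
    return abs(other_env - ref_env)
```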

通过上述方式获得第一环境变化亮度和第二环境变化亮度后,根据第一环境变化亮度、第二环境变化亮度、初始帧图像对应的人脸区域亮度、当前帧图像对应的人脸区域亮度、结果图像对应的人脸区域亮度,确定当前帧图像对应的当前化妆进度。After obtaining the first environment change brightness and the second environment change brightness in the above manner, according to the first environment change brightness, the second environment change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image, The brightness of the face area corresponding to the result image determines the current makeup progress corresponding to the current frame image.

首先根据第一环境变化亮度、初始帧图像对应的人脸区域亮度、当前帧图像对应的人脸区域亮度,确定当前帧图像对应的上妆亮度变化值。具体地,计算当前帧图像对应的人脸区域亮度与初始帧图像对应的人脸区域亮度之间的差值,得到当前帧图像对应的总亮度变化值,该总亮度变化值中包含上妆效果带来的亮度变化及环境光线变化带来的亮度变化。计算该总亮度变化值与第一环境变化亮度之间的差值,得到当前帧图像对应的上妆亮度变化值。First, the makeup brightness change value corresponding to the current frame image is determined according to the first environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image. Specifically, the difference between the face area brightness corresponding to the current frame image and that corresponding to the initial frame image is calculated to obtain the total brightness change value corresponding to the current frame image; this total brightness change value contains both the brightness change brought by the makeup effect and the brightness change brought by the ambient light change. The difference between this total brightness change value and the first environment change brightness is then calculated to obtain the makeup brightness change value corresponding to the current frame image.

通过上述方式计算出当前帧图像到参考帧图像人脸区域的总亮度变化值,并从总亮度变化值中扣除了当前帧图像到参考帧图像的环境亮度变化,得到的上妆亮度变化值更贴近于通过目标类型的上妆操作形成的实际亮度变化值,准确性很高。The total brightness change value of the face area from the current frame image to the reference frame image is calculated in the above manner, and the ambient brightness change from the current frame image to the reference frame image is deducted from it, so the obtained makeup brightness change value is closer to the actual brightness change produced by the target type of makeup operation, with high accuracy.

然后根据第二环境变化亮度、初始帧图像对应的人脸区域亮度、结果图像对应的人脸区域亮度,确定结果图像对应的上妆亮度变化值。具体地,计算结果图像对应的人脸区域亮度与初始帧图像对应的人脸区域亮度之间的差值,得到结果图像对应的总亮度变化值,该总亮度变化值中包含完成目标类型的上妆所带来的亮度变化及生成结果图像所采用的渲染操作对背景部分形成的亮度变化。计算该总亮度变化值与第二环境变化亮度之间的差值,得到结果图像对应的上妆亮度变化值。Then, the makeup brightness change value corresponding to the result image is determined according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image. Specifically, the difference between the face area brightness corresponding to the result image and that corresponding to the initial frame image is calculated to obtain the total brightness change value corresponding to the result image; this total brightness change value contains both the brightness change brought by completing the target type of makeup and the brightness change that the rendering operation used to generate the result image imposes on the background portion. The difference between this total brightness change value and the second environment change brightness is then calculated to obtain the makeup brightness change value corresponding to the result image.

通过上述方式计算出结果图像到参考帧图像人脸区域的总亮度变化值,并从总亮度变化值中扣除了渲染操作对结果图像的背景部分造成的环境亮度变化,得到的上妆亮度变化值更贴近于完成目标类型的上妆操作形成的实际亮度变化值,准确性很高。The total brightness change value of the face area from the result image to the reference frame image is calculated in the above manner, and the ambient brightness change that the rendering operation causes in the background portion of the result image is deducted from it, so the obtained makeup brightness change value is closer to the actual brightness change produced by completing the target type of makeup operation, with high accuracy.

最后计算当前帧图像对应的上妆亮度变化值与结果图像对应的上妆亮度变化值的比值,得到当前帧图像对应的当前化妆进度。Finally, the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image is calculated to obtain the current makeup progress corresponding to the current frame image.
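The arithmetic of the three steps above, deducting the environment change from each face-brightness change and then taking the ratio, can be sketched as follows (the function and parameter names are illustrative assumptions):

```python
def makeup_progress(init_face, cur_face, res_face, env_change_cur, env_change_res):
    # Total face-brightness change of the current frame: still contains both
    # the makeup effect and the ambient-light change.
    cur_total = cur_face - init_face
    # Deduct the first environment change brightness to isolate the makeup change.
    cur_change = cur_total - env_change_cur
    # Same deduction for the simulated result image, using the second environment
    # change brightness caused by the rendering operation.
    res_change = (res_face - init_face) - env_change_res
    # Current makeup progress = ratio of the two makeup brightness change values.
    return cur_change / res_change
```

For example, a face that brightened by 10 units (2 of which came from the environment) against a simulated full-makeup brightening of 20 units would report a progress of 0.4.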

通过计算当前帧图像中因上妆造成的上妆亮度变化值与结果图像中上妆完成造成的上妆亮度变化值之间的比值,能够准确得到当前帧图像对应的当前化妆进度。在该过程中剔除了环境亮度变化的影响,大大提高了检测化妆进度的准确性。The current makeup progress corresponding to the current frame image can be accurately obtained by calculating the ratio between the makeup brightness change value caused by makeup application in the current frame image and the makeup brightness change value caused by makeup completion in the result image. In this process, the influence of environmental brightness changes is eliminated, which greatly improves the accuracy of detecting the progress of makeup.

在本申请的另一些实施例中,用户上妆过程中环境亮度可能会出现较大变化,如突然变亮或突然变暗等情况。环境亮度变化较大时对人脸区域亮度的变化有很大影响,使得化妆进度检测的准确性降低。因此本申请实施例通过上述方式获得当前帧图像到参考帧图像的第一环境变化亮度之后,还将第一环境变化亮度与预设阈值进行比较,预设阈值可以为50或60等,本申请实施例并不限制预设阈值的具体取值,实际应用中可根据需求进行设定。In some other embodiments of the present application, the ambient brightness may change greatly while the user is applying makeup, for example suddenly becoming brighter or darker. A large ambient brightness change strongly affects the brightness change of the face area, reducing the accuracy of makeup progress detection. Therefore, after obtaining the first environment change brightness from the current frame image to the reference frame image in the above manner, the embodiment of the present application further compares the first environment change brightness with a preset threshold, which may be, for example, 50 or 60; the embodiment does not limit its specific value, which can be set as required in practical applications.

若比较出第一环境变化亮度小于或等于预设阈值,则确定当前帧图像与参考帧图像之间环境亮度变化不大,按照上述方式继续确定当前帧图像对应的当前化妆进度。If the comparison shows that the first environment change brightness is less than or equal to the preset threshold, it is determined that the ambient brightness does not change much between the current frame image and the reference frame image, and the current makeup progress corresponding to the current frame image continues to be determined in the above manner.

若比较出第一环境变化亮度大于预设阈值,则表明当前帧图像与参考帧图像之间环境亮度变化很大,则直接将上一帧图像对应的化妆进度确定为当前帧图像对应的当前化妆进度。并发送第一提示信息给用户的终端,用户的终端接收并显示第一提示信息,以提示用户回到初始帧图像对应的亮度环境下上妆。If the comparison shows that the first environment change brightness is greater than the preset threshold, indicating that the ambient brightness changes greatly between the current frame image and the reference frame image, the makeup progress corresponding to the previous frame image is directly determined as the current makeup progress corresponding to the current frame image, and first prompt information is sent to the user's terminal; the user's terminal receives and displays the first prompt information to prompt the user to return to the brightness environment corresponding to the initial frame image to apply makeup.

如此能够在环境亮度变化很大的情况下,直接以上一帧图像对应的化妆进度作为当前化妆进度,节省运算资源。且避免检测的化妆进度突然大幅增加或减少的情况,提高化妆进度检测的稳定性和准确性。In this way, when the ambient brightness changes greatly, the makeup progress corresponding to the previous frame image is directly used as the current makeup progress, saving computing resources; sudden large increases or decreases in the detected makeup progress are also avoided, improving the stability and accuracy of makeup progress detection.
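The threshold guard described above can be sketched as follows; the default threshold of 50 and the callback shape are illustrative assumptions, and sending the prompt information to the terminal is omitted:

```python
def progress_with_env_guard(env_change, prev_progress, compute_progress, threshold=50):
    # If the ambient brightness changed too much between the current frame and
    # the reference frame, skip the computation and reuse the previous frame's
    # progress (the real system would also prompt the user at this point).
    if env_change > threshold:
        return prev_progress
    return compute_progress()
```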

通过上述任一方式确定出当前化妆进度后,服务器发送该当前化妆进度给用户的终端。用户的终端接收到当前化妆进度后,显示该当前化妆进度。当前化妆进度可以为比值或百分数。终端可以通过进度条的形式来显示当前化妆进度。After the current makeup progress is determined by any of the above methods, the server sends the current makeup progress to the user's terminal. After receiving the current makeup progress, the user's terminal displays the current makeup progress. The current makeup progress can be a ratio or a percentage. The terminal can display the current makeup progress in the form of a progress bar.

在用户化妆的过程中,通过本申请实施例提供的化妆进度检测方法,实时检测第一帧图像之后的每帧图像相对于第一帧图像的化妆进度,并将检测的化妆进度显示给用户,使用户能够直观地看到自己的化妆进度,提高化妆效率。During the user's makeup process, the makeup progress detection method provided by the embodiment of the present application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, enabling the user to intuitively see his or her own makeup progress and improving makeup efficiency.

为了便于理解本申请实施例提供的方法,下面结合附图进行说明。如图19所示,A1:获取用户实时化妆视频中的初始帧图像和当前帧图像。A2:在初始帧图像上渲染完成目标类型的上妆的效果,得到结果图像。A3:分别获取初始帧图像、结果图像和当前帧图像对应的整体图像亮度,然后并行执行步骤A6和A7。A4:分别获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像。A5:分别获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像的人脸区域亮度,然后并行执行步骤A6和A7。A6:根据初始帧图像对应的整体图像亮度和人脸区域亮度以及当前帧图像对应的整体图像亮度和人脸区域亮度,确定当前帧图像对应的第一环境变化亮度,然后并行执行步骤A8和A9。A7:根据初始帧图像对应的整体图像亮度和人脸区域亮度以及结果图像对应的整体图像亮度和人脸区域亮度,确定结果图像对应的第二环境变化亮度,然后并行执行步骤A8和A9。A8:根据第一环境变化亮度、初始帧图像对应的人脸区域亮度、当前帧图像对应的人脸区域亮度,确定当前帧图像对应的上妆亮度变化值,然后执行步骤A10。A9:根据第二环境变化亮度、初始帧图像对应的人脸区域亮度、结果图像对应的人脸区域亮度,确定结果图像对应的上妆亮度变化值。A10:计算当前帧图像对应的上妆亮度变化值与结果图像对应的上妆亮度变化值的比值,得到当前帧图像对应的当前化妆进度。In order to facilitate understanding of the method provided by the embodiment of the present application, it is described below with reference to the accompanying drawings. As shown in Figure 19: A1: Acquire the initial frame image and the current frame image in the user's real-time makeup video. A2: Render the effect of completing the target type of makeup on the initial frame image to obtain the result image. A3: Obtain the overall image brightness corresponding to the initial frame image, the result image and the current frame image respectively, then execute steps A6 and A7 in parallel. A4: Obtain the face area images corresponding to the initial frame image, the result image and the current frame image respectively. A5: Obtain the face area brightness of the face area images corresponding to the initial frame image, the result image and the current frame image respectively, then execute steps A6 and A7 in parallel. A6: Determine the first environment change brightness corresponding to the current frame image according to the overall image brightness and face area brightness corresponding to the initial frame image and those corresponding to the current frame image, then execute steps A8 and A9 in parallel. A7: Determine the second environment change brightness corresponding to the result image according to the overall image brightness and face area brightness corresponding to the initial frame image and those corresponding to the result image, then execute steps A8 and A9 in parallel. A8: Determine the makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image, then execute step A10. A9: Determine the makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image. A10: Calculate the ratio of the makeup brightness change value corresponding to the current frame image to that corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.

如图20所示,根据参考帧图像及其对应的第一人脸关键点、当前帧图像及其对应的第二人脸关键点以及结果图像及其对应的第三人脸关键点,分别计算出三张图像各自对应的整体图像亮度。然后根据各个图像对应的人脸关键点信息分别对三张图像中的人脸进行校正和裁剪,分别得到参考帧图像、当前帧图像和结果图像各自对应的人脸区域图像。分别计算裁剪得到的三个人脸区域图像对应的人脸区域亮度。依据各图像对应的整体图像亮度和人脸区域亮度,计算当前帧图像对应的第一环境变化亮度。若第一环境变化亮度大于预设阈值,则提示用户回到初始帧图像对应的亮度环境下上妆。若第一环境变化亮度小于或等于预设阈值,则计算当前帧图像的人脸区域亮度与初始帧图像的人脸区域亮度之间的亮度差,计算该亮度差与第一环境变化亮度之间的差值。以及计算结果图像的人脸区域亮度与初始帧图像的人脸区域亮度之间的亮度差,计算该亮度差与第二环境变化亮度之间的差值。计算当前帧图像对应的上述差值与结果图像对应的上述差值之间的比值,即得到当前化妆进度。As shown in Figure 20, the overall image brightness corresponding to each of the three images is calculated according to the reference frame image and its corresponding first face key points, the current frame image and its corresponding second face key points, and the result image and its corresponding third face key points. Then the faces in the three images are corrected and cropped respectively according to the face key point information corresponding to each image, yielding the face area images corresponding to the reference frame image, the current frame image and the result image. The face area brightness corresponding to each of the three cropped face area images is calculated. According to the overall image brightness and face area brightness corresponding to each image, the first environment change brightness corresponding to the current frame image is calculated. If the first environment change brightness is greater than the preset threshold, the user is prompted to return to the brightness environment corresponding to the initial frame image to apply makeup. If the first environment change brightness is less than or equal to the preset threshold, the brightness difference between the face area brightness of the current frame image and that of the initial frame image is calculated, and the difference between this brightness difference and the first environment change brightness is computed; likewise, the brightness difference between the face area brightness of the result image and that of the initial frame image is calculated, and the difference between this brightness difference and the second environment change brightness is computed. The ratio between the above difference corresponding to the current frame image and that corresponding to the result image is calculated to obtain the current makeup progress.

在本申请实施例中,在用户化妆过程的初始帧图像的基础上,模拟生成完成上妆后的结果图像。依据当前帧图像、初始帧图像和该结果图像中人脸区域的亮度变化来确定当前化妆进度,且从人脸区域的亮度变化中扣除了环境亮度变化的影响,大大提高了化妆进度检测的准确性,且未采用深度学习模型,无需预先收集大量数据,运算量小,成本低,减少了服务器的处理压力,提高了化妆进度检测的效率,能够满足化妆进度检测的实时性要求。In the embodiment of the present application, on the basis of the initial frame image of the user's makeup process, the result image after the makeup is completed is generated by simulation. The current makeup progress is determined according to the brightness change of the face area among the current frame image, the initial frame image and the result image, and the influence of the ambient brightness change is deducted from the brightness change of the face area, which greatly improves the accuracy of makeup progress detection. Moreover, no deep learning model is used, so there is no need to collect a large amount of data in advance; the amount of computation is small and the cost is low, which reduces the processing pressure on the server, improves the efficiency of makeup progress detection, and can meet the real-time requirements of makeup progress detection.

实施例七Embodiment seven

本申请实施例提供一种化妆进度检测方法,该方法用于遮瑕妆容对应的化妆进度。参见图21,该实施例具体包括以下步骤:An embodiment of the present application provides a makeup progress detection method, which is used for the makeup progress corresponding to the concealer makeup. Referring to Figure 21, this embodiment specifically includes the following steps:

步骤801:获取用户当前进行特定妆容的实时化妆视频中的初始帧图像和当前帧图像。Step 801: Obtain an initial frame image and a current frame image in a real-time makeup video in which the user is currently performing a specific makeup look.

步骤801的操作与上述实施例五中步骤601的操作相同,在此不再赘述。The operation of step 801 is the same as the operation of step 601 in the fifth embodiment above, and will not be repeated here.

步骤802:分别获取初始帧图像和当前帧图像各自对应的脸部瑕疵信息。Step 802: Respectively acquire facial blemish information corresponding to the initial frame image and the current frame image.

首先分别获取初始帧图像和当前帧图像各自对应的人脸区域图像。获取初始帧图像和当前帧图像各自对应的人脸区域图像的具体操作过程相同,均可参考上述实施例五中步骤S5-S6的过程,在此不再赘述。Firstly, the face area images corresponding to the initial frame image and the current frame image are acquired respectively. The specific operation process of obtaining the face area images corresponding to the initial frame image and the current frame image is the same, and can refer to the process of steps S5-S6 in the fifth embodiment above, and will not be repeated here.

然后通过预设的皮肤检测模型检测初始帧图像对应的人脸区域图像中各瑕疵类别对应的瑕疵数目,将检测出的每种瑕疵类别及其对应的瑕疵数目作为初始帧图像对应的脸部瑕疵信息。同样地,通过预设的皮肤检测模型检测出当前帧图像对应的人脸区域图像中各瑕疵类别对应的瑕疵数目,得到当前帧图像对应的脸部瑕疵信息。Then the preset skin detection model detects the number of blemishes corresponding to each blemish category in the face area image corresponding to the initial frame image, and each detected blemish category and its corresponding blemish count are taken as the facial blemish information corresponding to the initial frame image. Similarly, the preset skin detection model detects the number of blemishes corresponding to each blemish category in the face area image corresponding to the current frame image to obtain the facial blemish information corresponding to the current frame image.

其中,预设的皮肤检测模型是预先通过大量人脸图像对神经网络模型进行训练得到的,能够对人脸图像中脸部的痘痘、斑点、皱纹等瑕疵进行识别并分类。上述瑕疵类别包括痘痘、斑点、皱纹等瑕疵中的一种或多种。通过预设的皮肤检测模型识别出初始帧图像及当前帧图像各自对应的人脸区域图像中的痘痘数目、斑点数目、皱纹数目等等。Among them, the preset skin detection model is obtained by training the neural network model through a large number of face images in advance, and can identify and classify blemishes such as acne, spots, and wrinkles on the face in the face image. The above blemish categories include one or more of blemishes such as acne, spots, and wrinkles. The number of pimples, the number of spots, the number of wrinkles, etc. in the face area images corresponding to the initial frame image and the current frame image are identified through the preset skin detection model.

步骤803:根据初始帧图像和当前帧图像及其各自对应的脸部瑕疵信息,确定当前帧图像对应的当前化妆进度。Step 803: Determine the current makeup progress corresponding to the current frame image according to the initial frame image and the current frame image and their corresponding facial blemish information.

Specifically, the server determines the current makeup progress through the following steps B1-B3:

B1: Calculate the facial blemish difference value between the current frame image and the initial frame image according to the facial blemish information of each.

Facial blemish information comprises blemish categories and their corresponding counts; the categories include one or more of acne, spots, wrinkles, and the like. For each category, the difference between the count in the initial frame image and the count in the current frame image is computed; these per-category differences are then summed, and the sum is taken as the facial blemish difference value between the current frame image and the initial frame image.

For example, suppose the blemish categories are acne and spots, the facial blemish information of the initial frame image records 5 pimples and 4 spots, and that of the current frame image records 3 pimples and 1 spot. The acne difference between the two frames is 2 and the spot difference is 3, so the facial blemish difference value between the current frame image and the initial frame image is 2 + 3 = 5.
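The per-category subtraction and summation of step B1 can be sketched as follows; the function and dictionary key names are illustrative, not part of the embodiment.

```python
def blemish_difference(initial_counts, current_counts):
    """Sum, over all blemish categories, of (initial count - current count)."""
    return sum(initial_counts[cat] - current_counts.get(cat, 0)
               for cat in initial_counts)

# Worked example from the text: 5 pimples / 4 spots before, 3 / 1 after.
diff = blemish_difference({"acne": 5, "spots": 4}, {"acne": 3, "spots": 1})
print(diff)  # 5
```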

The facial blemish difference value reflects how much smoother and more refined the current frame image looks compared with the initial frame image. This difference can be caused by several factors: concealer makeup, lighting changes, changes in shooting angle, and so on. Extensive experiments for this application showed that when the difference value exceeds a certain value, the difference is mainly attributable to concealer makeup; that value is configured in the server as a preset threshold, for example 4 or 5.

After the facial blemish difference value is computed as above, it is compared with the preset threshold. If it exceeds the threshold, the facial change caused by concealer makeup is considered significant, and the current makeup progress is subsequently determined through step B2. If it is less than or equal to the threshold, the change attributable to concealer makeup is small, and the progress is subsequently determined through step B3.

B2: If the facial blemish difference value is greater than the preset threshold, calculate the current makeup progress of the current frame image from the facial blemish difference value and the facial blemish information of the initial frame image.

The counts of all blemish categories in the initial frame image's facial blemish information are summed to obtain the total blemish count. The ratio of the facial blemish difference value to this total is taken as the current makeup progress of the current frame image.

For example, with 5 pimples and 4 spots in the initial frame image's facial blemish information, and 3 pimples and 1 spot in the current frame image's, the facial blemish difference value is 5 and the total blemish count of the initial frame image is 9, so the current makeup progress of the current frame image is 5/9.
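Steps B1 and B2 combine into a single ratio; a minimal sketch of that arithmetic follows (names are illustrative, as the embodiment specifies only the computation itself).

```python
def concealer_progress_by_counts(initial_counts, current_counts):
    """Facial blemish difference value divided by the initial frame's total count."""
    total = sum(initial_counts.values())
    diff = sum(initial_counts[cat] - current_counts.get(cat, 0)
               for cat in initial_counts)
    return diff / total

# Worked example from the text: difference 5 over total 9.
p = concealer_progress_by_counts({"acne": 5, "spots": 4}, {"acne": 3, "spots": 1})
print(p)  # 5/9 ≈ 0.5556
```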

When the facial blemish difference value between the current frame image and the initial frame image exceeds the preset threshold, i.e. the facial difference is mainly caused by concealer makeup, the ratio of the facial blemish difference value to the initial frame image's total blemish count is used directly as the current makeup progress. The determination process is simple and the computation light, so the current progress can be determined quickly and efficiently, meeting the real-time requirement of concealer-progress detection.

Once the current makeup progress is determined as above, the server sends it to the user's terminal, which displays it upon receipt. The progress may be expressed as a ratio or a percentage, and the terminal may render it as a progress bar.

B3: If the facial blemish difference value is less than or equal to the preset threshold, obtain a result image showing the user with concealer fully applied, and determine the current makeup progress of the current frame image from the initial frame image, the result image, and the current frame image.

When the facial blemish difference value is judged to be at or below the preset threshold, the difference between the current frame image and the initial frame image is considered small, and determining the progress directly from the blemish difference value and the initial frame's total blemish count would introduce a large error. In this case, therefore, the embodiment does not use that ratio; instead it first obtains a result image showing the user with concealer fully applied, and then determines the current makeup progress from the initial frame image, the result image, and the current frame image.

First, 3D rendering is used to render the finished-concealer effect onto the initial frame image, producing the result image. Before generating the result image, the initial frame image may be rotation-corrected so that the line between the two eyes is parallel to the horizontal; rendering on the corrected image then yields a result image whose eye line is likewise horizontal, so no separate rotation correction of the result image is needed, saving computation.

After the result image is obtained, a face-keypoint detection model detects the third set of face keypoints corresponding to the result image, and the face region image is cropped from the result image according to those keypoints, following steps S5-S6 of the fifth embodiment; the specific cropping process is not repeated here.

The face region images corresponding to the initial frame image, the current frame image, and the result image are thus obtained, and the current makeup progress is then determined from these three face region images.

The face region images obtained above for the initial frame image, the current frame image, and the result image are all in the RGB color space. Extensive experiments conducted in advance for this application examined how concealer affects the channel components of different color spaces and found that its effect on the individual RGB channels differs little. The HLS color space, by contrast, is composed of hue, lightness, and saturation components, and experiments showed that concealer causes a marked change in the saturation component. Therefore the face region images of the initial frame image, the result image, and the current frame image are each converted from RGB to HLS, and the saturation channel is then separated from the HLS representation, yielding for each of the three images a saturation-only image in the HLS color space.
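In practice this conversion would typically be a library call (e.g. OpenCV's `cv2.cvtColor` with `cv2.COLOR_BGR2HLS`, then taking the S plane). The per-pixel saturation itself follows the standard HLS formula, sketched below for a single RGB pixel as a simplified illustration, not the embodiment's actual code:

```python
def hls_saturation(r, g, b):
    """HLS saturation of one RGB pixel with components in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:          # achromatic pixel: no saturation
        return 0.0
    lightness = (mx + mn) / 2.0
    if lightness <= 0.5:
        return (mx - mn) / (mx + mn)
    return (mx - mn) / (2.0 - mx - mn)

print(hls_saturation(1.0, 0.0, 0.0))   # pure red -> 1.0
print(hls_saturation(0.5, 0.5, 0.5))   # gray     -> 0.0
```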

After this conversion, a preset filtering algorithm computes a smoothing factor for each of the converted face region images of the initial frame image, the result image, and the current frame image. The preset filtering algorithm may be, for example, a Laplacian or a Gaussian filter. Taking Gaussian filtering as an example, the smoothing factor of each face region image can be computed with a Gaussian kernel of a preset size. The preset size may be 7*7; other values are also possible, and this embodiment does not limit the specific value.

With the smoothing factors of the initial frame image, the result image, and the current frame image in hand, the current makeup progress of the current frame image is determined as follows. Specifically, compute the first difference between the current frame image's smoothing factor and the initial frame image's, and the second difference between the result image's smoothing factor and the initial frame image's. The ratio of the first difference to the second difference is taken as the current makeup progress of the current frame image.
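The final ratio of differences can be sketched as below; the numeric smoothing factors are made-up placeholders, since the embodiment leaves the exact definition of the smoothing factor to the chosen filter.

```python
def concealer_progress_by_smoothing(s_initial, s_current, s_result):
    """First difference (current - initial) over second difference (result - initial)."""
    return (s_current - s_initial) / (s_result - s_initial)

# Placeholder values: halfway between the bare-skin and full-coverage looks.
print(concealer_progress_by_smoothing(0.30, 0.45, 0.60))  # 0.5
```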

Once the current makeup progress is determined as above, the server sends it to the user's terminal, which displays it upon receipt. The progress may be expressed as a ratio or a percentage, and the terminal may render it as a progress bar.

While the user applies makeup, the makeup progress detection method provided by this embodiment detects, in real time, the makeup progress of each frame after the initial frame relative to the initial frame, and displays the detected progress to the user. The user can thus see the concealer progress at a glance, improving makeup efficiency.

To aid understanding of the method provided by this embodiment, it is described below with reference to the drawings. As shown in Fig. 22, the faces in the initial frame image and the current frame image are corrected and cropped according to their respective face keypoints (the first and second sets), and the skin detection model is then invoked to detect the facial blemish information of each frame. The facial blemish difference value between the current frame image and the initial frame image is computed from the two sets of blemish information. If the difference value exceeds the preset threshold, the ratio of the difference value to the initial frame image's total blemish count gives the current makeup progress. If the difference value is at or below the threshold, the finished-concealer result image is rendered on the initial frame image, the third set of face keypoints corresponding to the result image is detected, and the face in the result image is corrected and cropped according to those keypoints.
The face region images of the initial frame image, current frame image, and result image are converted into saturation-only images in the HLS color space. The preset filtering algorithm computes a smoothing factor for each of the three converted face region images. The first difference, between the current frame image's smoothing factor and the initial frame image's, and the second difference, between the result image's smoothing factor and the initial frame image's, are computed, and their ratio is taken as the current makeup progress of the current frame image.

In this embodiment, the current frame image and the initial frame image of the user's makeup process are acquired, and the facial blemish difference value between them is determined. When the difference value exceeds the preset threshold, the ratio of the difference value to the initial frame image's total blemish count gives the current concealer progress. When the difference value is at or below the threshold, a finished-concealer result image is simulated from the initial frame image, smoothing factors are determined for the initial frame image, the result image, and the current frame image, the differences of the current frame image's and the result image's smoothing factors from the initial frame image's are computed, and the ratio of the two differences is the current makeup progress.

Thus, when the facial blemish difference value between the current frame image and the initial frame image exceeds the preset threshold, the progress determination is simple and computationally light, so the current progress can be found quickly and efficiently, meeting the real-time requirement of concealer-progress detection. When the difference value is at or below the threshold, a filtering algorithm is introduced to compute each image's smoothing factor, and the progress is determined from how the smoothing factors change; subtler changes can thereby be measured more precisely, preventing short-lived jumps in the reported makeup progress.

Further, this embodiment uses face keypoints to correct and crop the user's face region in each video frame, improving the accuracy of face-region recognition.

This embodiment detects concealer progress accurately through image processing alone, with little computation and at low cost. It reduces the server's processing load, improves the efficiency of concealer-progress detection, meets the real-time requirement of that detection, and reduces both the algorithm's dependence on hardware resources and the cost of human labor.

Embodiment Eight

1. A makeup color recognition method, the method comprising:

acquiring a makeup image of a user;

identifying the hue category to which the color of a preset part of the user's face in the makeup image belongs;

determining the user's makeup tone according to the hue category corresponding to the preset part.

2. According to 1, identifying the hue category to which the color of the preset part of the user's face in the makeup image belongs comprises:

detecting face keypoints corresponding to the makeup image;

obtaining a face region image corresponding to the makeup image according to the face keypoints;

identifying the hue category to which the color of the preset part of the user's face in the face region image belongs.

3. According to 2, identifying the hue category to which the color of the preset part of the user's face in the face region image belongs comprises:

cropping, from the face region image according to the face keypoints, a target region image corresponding to the preset part;

obtaining the dominant pixel color of the target region image;

determining, according to the dominant pixel color, the hue category to which the color of the preset part belongs.

4. According to 3, the preset part comprises facial skin; cropping the target region image corresponding to the preset part from the face region image according to the face keypoints comprises:

cropping a facial image from the face region image according to the face keypoints;

removing the eyebrow regions, eye regions, and mouth region from the facial image according to the face keypoints, to obtain the target region image corresponding to the facial skin.

5. According to 3, the preset part comprises an eyeshadow part; cropping the target region image corresponding to the preset part from the face region image according to the face keypoints comprises:

cropping an eye image from the face region image according to the eye keypoints included in the face keypoints;

performing image dilation on the eye image a preset number of times;

removing the eye region from the dilated eye region according to the eye keypoints included among the eye keypoints, to obtain the target region image corresponding to the eyeshadow part.

6. According to 3, the preset part comprises the mouth; cropping the target region image corresponding to the preset part from the face region image according to the face keypoints comprises:

interpolating the upper-lip upper-edge keypoints, upper-lip lower-edge keypoints, lower-lip upper-edge keypoints, and lower-lip lower-edge keypoints included in the face keypoints, to obtain upper-lip upper-edge, upper-lip lower-edge, lower-lip upper-edge, and lower-lip lower-edge interpolation points;

cropping an upper-lip image from the face region image according to the upper-lip upper-edge keypoints, the upper-lip upper-edge interpolation points, the upper-lip lower-edge keypoints, and the upper-lip lower-edge interpolation points;

cropping a lower-lip image from the face region image according to the lower-lip upper-edge keypoints, the lower-lip upper-edge interpolation points, the lower-lip lower-edge keypoints, and the lower-lip lower-edge interpolation points;

stitching the upper-lip image and the lower-lip image into the target region image corresponding to the mouth.

7. According to 3, determining, according to the dominant pixel color, the hue category to which the color of the preset part belongs comprises:

converting the color space of the dominant pixel color to the preset color space corresponding to the preset part;

determining, according to each color channel value of the dominant pixel color in the preset color space, the hue interval to which each channel value belongs;

determining the hue category corresponding to the hue interval as the hue category to which the color of the preset part belongs.

8. According to 7, the preset part comprises at least one of facial skin, an eyeshadow part, and the mouth; the preset color space corresponding to facial skin comprises the LAB color space, and the preset color space corresponding to the eyeshadow part and the mouth comprises the HSV color space.
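Items 7-8 only state that each channel value is mapped to a hue interval; the interval bounds are not given. A minimal sketch under an assumed warm/cool split of the HSV hue circle might look like the following, where the bounds and category names are entirely hypothetical:

```python
def tone_category(hue_degrees):
    """Map an HSV hue in degrees to an assumed warm/cool tone category.

    The 90/330 interval bounds are illustrative placeholders, not from the patent.
    """
    h = hue_degrees % 360
    return "warm" if h < 90 or h >= 330 else "cool"

print(tone_category(30))   # warm (orange-ish lipstick hue)
print(tone_category(210))  # cool (blue-ish eyeshadow hue)
```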

9. According to any one of 1-8, the preset part comprises one or more of facial skin, an eyeshadow part, and the mouth; determining the user's makeup tone according to the hue category corresponding to the preset part comprises:

determining the color coefficient corresponding to the hue category of each preset part;

calculating the user's makeup color coefficient according to the preset weight and the color coefficient of each preset part;

determining the hue category corresponding to the makeup color coefficient as the user's makeup tone.
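The weighted combination in item 9 can be sketched as below; the part names, coefficient values, and weights are illustrative only, since the patent does not fix them.

```python
def makeup_color_coefficient(coeffs, weights):
    """Weighted sum of per-part color coefficients (weights assumed to sum to 1)."""
    return sum(coeffs[part] * weights[part] for part in coeffs)

c = makeup_color_coefficient(
    {"skin": 0.2, "eyeshadow": 0.8, "mouth": 0.6},   # hypothetical coefficients
    {"skin": 0.5, "eyeshadow": 0.2, "mouth": 0.3},   # hypothetical weights
)
print(round(c, 2))  # 0.44
```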

10. According to any one of 2-8, obtaining the face region image corresponding to the makeup image according to the face keypoints comprises:

performing rotation correction on the makeup image and the face keypoints according to the face keypoints;

cropping an image containing the face region from the corrected makeup image according to the corrected face keypoints;

scaling the image containing the face region to a preset size, to obtain the face region image corresponding to the makeup image.

11. According to 10, performing rotation correction on the makeup image and the face keypoints according to the face keypoints comprises:

determining the left-eye center coordinates and the right-eye center coordinates according to the left-eye keypoints and right-eye keypoints included in the face keypoints;

determining the rotation angle and rotation center coordinates corresponding to the makeup image according to the left-eye center coordinates and the right-eye center coordinates;

performing rotation correction on the makeup image and the face keypoints according to the rotation angle and the rotation center coordinates.
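The rotation angle and center in item 11 follow from the two eye centers; a sketch is given below. The coordinate convention and the choice of the eye-line midpoint as the rotation center are assumptions, as item 11 does not specify either.

```python
import math

def rotation_params(left_eye, right_eye):
    """Angle (degrees) that would level the eye line, and the eye-line midpoint."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    return angle, center

angle, center = rotation_params((100.0, 120.0), (200.0, 120.0))
print(angle, center)  # 0.0 (already level), (150.0, 120.0)
```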

12. According to 10, cropping the image containing the face region from the corrected makeup image according to the corrected face keypoints comprises:

performing image cropping on the face region contained in the corrected makeup image according to the corrected face keypoints.

13. According to 12, performing image cropping on the face region contained in the corrected makeup image according to the corrected face keypoints comprises:

determining the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate among the corrected face keypoints;

determining, according to the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate, the clipping box corresponding to the face region in the corrected makeup image;

cropping the image containing the face region from the corrected makeup image according to the clipping box.
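Items 13-14 amount to taking the axis-aligned extremes of the corrected keypoints and optionally enlarging the box by a preset multiple; a sketch follows. Enlarging about the box's own center is an assumption, since the items do not fix the anchor of the enlargement.

```python
def face_crop_box(points, scale=1.0):
    """Bounding box over keypoints, enlarged by `scale` about its own center."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w, half_h = (x1 - x0) / 2.0 * scale, (y1 - y0) / 2.0 * scale
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(face_crop_box([(10, 20), (50, 80), (30, 40)]))  # (10.0, 20.0, 50.0, 80.0)
```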

14. According to 13, further comprising:

enlarging the clipping box by a preset multiple;

cropping the image containing the face region from the corrected makeup image according to the enlarged clipping box.

15. According to 10, further comprising:

performing scaling and translation on the corrected face keypoints according to the size of the image containing the face region and the preset size.

16. According to any one of 1-8, further comprising:

detecting whether the makeup image contains at least one complete face image;

if so, performing the operation of identifying the hue category to which the color of the preset part of the user's face in the makeup image belongs;

if not, sending prompt information to the user's terminal, the prompt information being used to prompt the user to provide a makeup image containing at least one complete face image.

17. A makeup color recognition apparatus, the apparatus comprising:

an acquisition module, configured to acquire a makeup image of a user;

an identification module, configured to identify the hue category to which the color of a preset part of the user's face in the makeup image belongs;

a determination module, configured to determine the user's makeup tone according to the hue category corresponding to the preset part.

18. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor running the computer program to implement the method of any one of 1-16.

19. A computer-readable storage medium having a computer program stored thereon, the program being executed by a processor to implement the method of any one of 1-16.

This embodiment of the application provides a makeup color recognition method. The method identifies the hue category to which the color of a preset part of the user's face in a makeup image belongs, and then determines the user's makeup tone according to the hue category of that part. The makeup tone can be identified accurately through image processing alone, without a deep learning model, so the computation is light, the cost low, and the server's processing load reduced.

参见图23,该方法具体包括以下步骤:Referring to Figure 23, the method specifically includes the following steps:

步骤901:获取用户的化妆图像。Step 901: Obtain a makeup image of a user.

用户的化妆图像中包括至少一个人脸图像。该化妆图像可以为单独一张图像,也可以为用户的化妆视频中的任意一个视频帧。本申请实施例的执行主体为服务器。用户的手机或电脑等用户终端上安装有与服务器提供的识别妆容颜色的服务相匹配的客户端。该客户端中设置有用于提交化妆图像的接口。当用户需要识别某个妆容对应的妆容色调时,用户通过该接口来提交化妆图像。The user's makeup image includes at least one face image. The makeup image can be a single image, or any video frame in the user's makeup video. The execution subject of this embodiment of the application is a server. A client that matches the service of identifying makeup color provided by the server is installed on a user terminal such as a user's mobile phone or computer. The client is provided with an interface for submitting makeup images. When the user needs to identify the makeup color corresponding to a certain makeup, the user submits the makeup image through this interface.

具体地,当客户端检测到用户点击该接口时显示一个妆容颜色识别界面,该界面中可以包括拍摄按钮和/或本地文件上传接口。若客户端检测到用户点击拍摄按钮,则调用用户终端上的摄像头来拍摄化妆图像或化妆视频。将拍摄的化妆图像或化妆视频发送给服务器。若客户端检测用户点击本地文件上传接口,则显示本地文件夹列表,以使用户从中选择需要的化妆图像或化妆视频。客户端将用户选择的化妆图像或化妆视频发送给服务器。Specifically, when the client detects that the user clicks on the interface, a makeup color recognition interface is displayed, and the interface may include a shooting button and/or a local file upload interface. If the client detects that the user clicks the shooting button, the camera on the user terminal is invoked to take a makeup image or a makeup video. Send the captured makeup image or makeup video to the server. If the client detects that the user clicks on the local file upload interface, a list of local folders is displayed so that the user can select a desired makeup image or makeup video therefrom. The client sends the makeup image or makeup video selected by the user to the server.

服务器接收用户终端发送的化妆图像或化妆视频。若服务器接收到用户终端发送的化妆视频,则将当前接收到的当前帧图像作为待识别的化妆图像。The server receives the makeup image or makeup video sent by the user terminal. If the server receives the makeup video sent by the user terminal, the currently received current frame image is used as the makeup image to be recognized.

在本申请的另一些实施例中,服务器获得用户的化妆图像后,还检测该化妆图像中是否至少包含一个完整人脸图像。若该化妆图像中包含一个或多个完整人脸图像,则按照本实施例提供的方法来识别该化妆图像中每个完整人脸图像对应的妆容色调。若检测出该化妆图像中不包含人脸图像,或者包含的所有人脸图像都不完整,则发送提示信息给用户的终端。用户的终端接收并显示该提示信息,以提示用户提供至少包含一个完整人脸图像的化妆图像。例如,提示信息可以为“请保持化妆图像中至少包括一个完整人脸”。In other embodiments of the present application, after the server obtains the makeup image of the user, it also detects whether the makeup image contains at least one complete human face image. If the makeup image includes one or more complete human face images, the makeup tone corresponding to each complete human face image in the makeup image is identified according to the method provided in this embodiment. If it is detected that the makeup image does not contain a face image, or all the face images included are incomplete, a prompt message is sent to the user's terminal. The user's terminal receives and displays the prompt information to prompt the user to provide a makeup image including at least one complete face image. For example, the prompt information may be "please keep at least one complete human face in the makeup image".

服务器通过本步骤获得用户的化妆图像后，通过如下步骤902和903的操作来识别该化妆图像中用户脸部的妆容色调。After the server obtains the user's makeup image through this step, it identifies the makeup tone of the user's face in the makeup image through the operations of steps 902 and 903 below.

步骤902:识别化妆图像中用户面部预设部位的颜色所属的色调类别。Step 902: Identify the tone category to which the color of the preset part of the user's face in the makeup image belongs.

化妆图像中包括至少一个完整人脸图像,由于识别每个完整人脸图像的妆容色调的过程均相同,因此本申请实施例以识别一个完整人脸图像的妆容色调的过程为例进行说明。对于化妆图像包括的至少一个完整人脸图像,从中选择任意一个完整人脸图像作为当前待处理的用户人脸。The makeup image includes at least one complete human face image. Since the process of identifying the makeup tone of each complete human face image is the same, the embodiment of the present application uses the process of identifying the makeup tone of a complete human face image as an example for illustration. For at least one complete human face image included in the makeup image, any one of the complete human face images is selected as the user's face currently to be processed.

服务器具体通过如下步骤S1-S3的操作来识别该用户人脸面部预设部位的颜色所属的色调类别,包括:The server specifically recognizes the hue category to which the color of the preset part of the user's face belongs through the following steps S1-S3, including:

S1:检测化妆图像对应的人脸关键点。S1: Detect the key points of the face corresponding to the makeup image.

服务器中配置了预先训练好的用于检测人脸关键点的检测模型,通过该检测模型提供人脸关键点检测的接口服务。服务器获取到用户的化妆图像后,调用人脸关键点检测的接口服务,通过检测模型识别出该化妆图像中用户人脸的所有人脸关键点。The server is configured with a pre-trained detection model for detecting key points of the face, and the interface service of key point detection of the face is provided through the detection model. After the server acquires the user's makeup image, it calls the interface service of facial key point detection, and identifies all the key points of the user's face in the makeup image through the detection model.

其中,识别出的人脸关键点包括用户脸部轮廓上的关键点及嘴巴、鼻子、眼睛、眉毛等部位的关键点。识别出的人脸关键点的数目可以为106个。Among them, the recognized key points of the human face include key points on the outline of the user's face and key points of the mouth, nose, eyes, eyebrows and other parts. The number of recognized face key points may be 106.

S2:根据人脸关键点,获取化妆图像对应的人脸区域图像。S2: Acquire the face area image corresponding to the makeup image according to the key points of the face.

服务器具体通过如下步骤S20-S22的操作来获取当前用户人脸对应的人脸区域图像,包括:The server specifically obtains the face area image corresponding to the current user's face through the following steps S20-S22, including:

S20:根据人脸关键点,对化妆图像及人脸关键点进行旋转矫正。S20: Perform rotation correction on the makeup image and the key points of the face according to the key points of the face.

具体地，根据该用户人脸对应的人脸关键点包括的左眼关键点和右眼关键点，分别确定左眼中心坐标和右眼中心坐标。从人脸关键点确定出左眼区域的全部左眼关键点及右眼区域的全部右眼关键点。对确定出的全部左眼关键点的横坐标取均值，以及对全部左眼关键点的纵坐标取均值，将左眼对应的横坐标的均值和纵坐标的均值组成一个坐标，将该坐标确定为左眼中心坐标。按照同样的方式确定出右眼中心坐标。Specifically, the left-eye center coordinate and the right-eye center coordinate are determined from the left-eye key points and right-eye key points included in the face key points of the user's face. All left-eye key points in the left-eye area and all right-eye key points in the right-eye area are determined from the face key points. The abscissas of all the determined left-eye key points are averaged, and their ordinates are averaged; the two means form one coordinate, which is taken as the left-eye center coordinate. The right-eye center coordinate is determined in the same manner.

然后根据左眼中心坐标和右眼中心坐标，确定化妆图像对应的旋转角度及旋转中心点坐标。如图4所示，根据左眼中心坐标和右眼中心坐标计算出两个坐标的水平差值dx和竖直差值dy，以及计算出左眼中心坐标和右眼中心坐标的两眼连线长度d。依据两眼连线长度d、水平差值dx和竖直差值dy，计算出两眼连线与水平方向的夹角θ，该夹角θ即为化妆图像对应的旋转角度。然后根据左眼中心坐标和右眼中心坐标计算出两眼连线的中点坐标，该中点坐标即为化妆图像对应的旋转中心点坐标。Then, the rotation angle and rotation center point coordinates corresponding to the makeup image are determined from the left-eye and right-eye center coordinates. As shown in FIG. 4, the horizontal difference dx and the vertical difference dy of the two coordinates are calculated from the left-eye and right-eye center coordinates, as well as the length d of the line connecting the two eye centers. From the inter-eye line length d, the horizontal difference dx and the vertical difference dy, the angle θ between the inter-eye line and the horizontal direction is calculated; this angle θ is the rotation angle corresponding to the makeup image. The coordinates of the midpoint of the inter-eye line are then calculated from the left-eye and right-eye center coordinates; this midpoint is the rotation center point corresponding to the makeup image.
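The computation of dx, dy, d, θ and the rotation center described above can be sketched as follows. This is a minimal illustration; the function name `rotation_params` and the degree convention are assumptions, not part of the original text:

```python
import math

def rotation_params(left_eye, right_eye):
    """Compute the roll angle and rotation center from the two eye centers."""
    dx = right_eye[0] - left_eye[0]           # horizontal difference dx
    dy = right_eye[1] - left_eye[1]           # vertical difference dy
    d = math.hypot(dx, dy)                    # length d of the inter-eye line
    theta = math.degrees(math.atan2(dy, dx))  # angle vs. the horizontal axis
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)  # midpoint = rotation center
    return theta, center, d
```

For eye centers at (100, 100) and (200, 200), this yields θ = 45° and a rotation center of (150, 150).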

根据计算出的旋转角度和旋转中心点坐标,对化妆图像及人脸关键点进行旋转矫正。具体将旋转角度和旋转中心点坐标输入用于计算图片的旋转矩阵的预设函数中,该预设函数可以为OpenCV中的函数cv2.getRotationMatrix2D()。通过调用该预设函数获得化妆图像对应的旋转矩阵。然后计算化妆图像和该旋转矩阵的乘积,得到矫正后的化妆图像。利用旋转矩阵对化妆图像进行矫正的操作,也可以通过调用OpenCV中的函数cv2.warpAffine()来完成。According to the calculated rotation angle and the coordinates of the rotation center point, the makeup image and the key points of the face are corrected for rotation. Specifically, the rotation angle and the coordinates of the rotation center point are input into a preset function used to calculate the rotation matrix of the picture, and the preset function may be the function cv2.getRotationMatrix2D() in OpenCV. The rotation matrix corresponding to the makeup image is obtained by calling this preset function. Then calculate the product of the makeup image and the rotation matrix to obtain the corrected makeup image. Correcting the makeup image using the rotation matrix can also be done by calling the function cv2.warpAffine() in OpenCV.
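For illustration, the 2×3 affine matrix that cv2.getRotationMatrix2D() returns can be reproduced in pure Python as below, following the formula documented for OpenCV; in practice cv2.warpAffine() would then apply this matrix to the image. The helper names are assumptions:

```python
import math

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Pure-Python equivalent of cv2.getRotationMatrix2D: a 2x3 affine
    matrix that rotates about `center` by `angle_deg`."""
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]

def apply_affine(m, pt):
    """Apply a 2x3 affine matrix to a single (x, y) point."""
    x, y = pt
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Note that the rotation center is a fixed point of the transform, which is exactly why the midpoint between the eyes is chosen as the center.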

对于人脸关键点，需要对每个人脸关键点逐一进行矫正，以与矫正后的化妆图像相对应。在对人脸关键点逐一矫正时，需要进行两次坐标系的转换，第一次将以化妆图像左上角为原点的坐标系转化为以左下角为原点的坐标系，第二次进一步将以左下角为原点的坐标系转化为以上述旋转中心点坐标为坐标原点的坐标系，如图5所示。经过两次坐标系转换后对每个人脸关键点进行如下公式(1)的转换，可完成对人脸关键点的旋转矫正。For the face key points, each key point needs to be corrected one by one so as to correspond to the corrected makeup image. When correcting the key points one by one, two coordinate system conversions are required: the first converts the coordinate system whose origin is the upper-left corner of the makeup image into one whose origin is the lower-left corner, and the second further converts the lower-left-origin coordinate system into one whose origin is the above-mentioned rotation center point, as shown in FIG. 5. After the two coordinate system conversions, applying the following formula (1) to each face key point completes the rotation correction of the key points.

x = x0·cosθ + y0·sinθ
y = −x0·sinθ + y0·cosθ        (1)

在公式(1)中，x0、y0分别为旋转矫正前人脸关键点的横坐标和纵坐标，x、y分别为旋转矫正后人脸关键点的横坐标和纵坐标，θ为上述旋转角度。In formula (1), x0 and y0 are the abscissa and ordinate of a face key point before rotation correction, x and y are the abscissa and ordinate of the face key point after rotation correction, and θ is the above rotation angle.

经过矫正后的化妆图像和人脸关键点是基于整张图像的，整张图像不仅包含用户的人脸信息，还包括其他多余的图像信息，因此需要通过如下步骤S21对矫正后的图像进行人脸区域的裁剪。The corrected makeup image and face key points are based on the entire image, which contains not only the user's face information but also other redundant image information. It is therefore necessary to crop the face region from the corrected image through the following step S21.

S21:根据矫正后的人脸关键点,从矫正后的化妆图像中截取包含人脸区域的图像。S21: According to the corrected key points of the human face, an image including the human face area is intercepted from the corrected makeup image.

根据矫正后的人脸关键点,对矫正后的化妆图像中包含该用户人脸的区域进行图像截取。首先从矫正后的人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值。然后根据最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值,确定矫正后的化妆图像中人脸区域对应的截取框。具体地,将最小横坐标值和最小纵坐标值组成一个坐标点,将该坐标点作为人脸区域对应的截取框的左上角顶点。将最大横坐标值和最大纵坐标值组成另一个坐标点,将该坐标点作为人脸区域对应的截取框的右下角顶点。根据上述左上角顶点和右下角顶点在矫正后的化妆图像中确定截取框的位置,从矫正后的化妆图像中截取出位于该截取框内的图像,即截取出包含该用户人脸的图像。According to the key points of the corrected face, image interception is performed on the area containing the user's face in the corrected makeup image. Firstly, the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value are determined from the corrected face key points. Then according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value, determine the clipping frame corresponding to the face area in the corrected makeup image. Specifically, the minimum abscissa value and the minimum ordinate value form a coordinate point, and the coordinate point is used as the upper left vertex of the interception frame corresponding to the face area. The maximum abscissa value and the maximum ordinate value form another coordinate point, and this coordinate point is used as the vertex of the lower right corner of the interception frame corresponding to the face area. Determine the position of the clipping frame in the corrected makeup image according to the above-mentioned upper left corner vertex and lower right corner vertex, and cut out the image located in the clipping frame from the corrected makeup image, that is, cut out the image containing the user's face.
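The interception frame described above, with its top-left vertex at the minimum abscissa/ordinate and its bottom-right vertex at the maximum, can be sketched as follows (the helper name `face_bbox` is hypothetical):

```python
def face_bbox(keypoints):
    """Axis-aligned crop box over the corrected face key points:
    (min x, min y) is the top-left vertex, (max x, max y) the bottom-right."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

The image inside this box would then be cut out, e.g. with NumPy/OpenCV slicing such as image[y0:y1, x0:x1].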

在本申请的另一些实施例中，为了确保将用户的全部的人脸区域截取出来，避免截取不完整导致后续化妆进度检测误差很大的情况发生，还可以将上述截取框放大预设倍数，该预设倍数可以为1.15或1.25等。本申请实施例并不限制预设倍数的具体取值，实际应用中可根据需求来设定该预设倍数。将上述截取框向周围放大预设倍数之后，从矫正后的化妆图像中截取出位于放大后的截取框中的图像，从而截取出包含用户的完整人脸区域的图像。In other embodiments of the present application, in order to ensure that the user's entire face area is captured, and to avoid an incomplete crop causing a large error in subsequent makeup progress detection, the above interception frame may also be enlarged by a preset multiple, which may be 1.15, 1.25, or the like. The embodiment of the present application does not limit the specific value of the preset multiple, which can be set as required in practical applications. After the interception frame is enlarged outward by the preset multiple, the image located in the enlarged frame is cut out from the corrected makeup image, thereby capturing an image containing the user's complete face area.
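Enlarging the interception frame around its center by a preset multiple can be sketched as below. The helper name `expand_bbox` is hypothetical, and clamping the enlarged box to the image bounds is an added safeguard not stated in the text:

```python
def expand_bbox(top_left, bottom_right, factor, img_w, img_h):
    """Grow the crop box around its center by `factor` (e.g. 1.15 or 1.25),
    clamping to the image bounds so the crop stays valid."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0       # box center
    hw = (x1 - x0) * factor / 2.0                   # enlarged half-width
    hh = (y1 - y0) * factor / 2.0                   # enlarged half-height
    return ((max(0, cx - hw), max(0, cy - hh)),
            (min(img_w, cx + hw), min(img_h, cy + hh)))
```

For example, a (100, 100)-(300, 300) box enlarged by 1.25 in a 400x400 image becomes (75, 75)-(325, 325).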

S22:将包含人脸区域的图像缩放至预设尺寸,得到化妆图像对应的人脸区域图像。S22: Scale the image containing the face area to a preset size to obtain an image of the face area corresponding to the makeup image.

通过上述方式从化妆图像中截取出包含用户的人脸区域的图像后,将包含人脸区域的图像缩放至预设尺寸,得到化妆图像对应的人脸区域图像。该预设尺寸可以为390×390或400×400等。本申请实施例并不限制预设尺寸的具体取值,实际应用中可根据需求进行设定。After the image including the face area of the user is intercepted from the makeup image in the above manner, the image including the face area is scaled to a preset size to obtain the face area image corresponding to the makeup image. The preset size may be 390×390 or 400×400 or the like. The embodiment of the present application does not limit the specific value of the preset size, which can be set according to requirements in practical applications.

为了使人脸关键点与缩放后的人脸区域图像相适应，将截取的包含人脸区域的图像缩放至预设尺寸后，还需根据缩放前包含人脸区域的图像的尺寸及该预设尺寸，对上述矫正后的人脸关键点进行缩放平移处理。具体地，依据缩放前包含人脸区域的图像的尺寸及图像所需缩放至的预设尺寸，确定每个人脸关键点的平移方向及平移距离，进而根据每个人脸关键点对应的平移方向和平移距离，分别对每个人脸关键点进行平移操作，并记录平移后的每个人脸关键点的坐标。In order to adapt the face key points to the scaled face area image, after the captured image containing the face area is scaled to the preset size, the corrected face key points also need to be scaled and translated according to the size of the image before scaling and the preset size. Specifically, the translation direction and translation distance of each face key point are determined from the size of the image containing the face area before scaling and the preset size to which the image is scaled; each key point is then translated according to its translation direction and distance, and the coordinates of each translated key point are recorded.
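Mapping the key points onto the resized face area image can be sketched as below. The helper name `rescale_keypoints` is hypothetical, and the sketch folds the translation and scaling into one per-axis factor, which assumes the key points are already expressed relative to the crop's top-left corner:

```python
def rescale_keypoints(keypoints, crop_w, crop_h, target=390):
    """Map key points from the cropped face image onto the resized
    target x target image (390x390 is one of the preset sizes above)."""
    return [(x * target / crop_w, y * target / crop_h) for x, y in keypoints]
```

For a 400x400 crop resized to 390x390, a key point at (100, 200) maps to (97.5, 195).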

通过上述方式从化妆图像中获得人脸区域图像,并通过旋转矫正和平移缩放等操作使人脸关键点与获得的人脸区域图像相适应。之后通过步骤S3的操作来识别用户面部预设部位的颜色所属的色调类别。The face area image is obtained from the makeup image in the above way, and the key points of the face are adapted to the obtained face area image through operations such as rotation correction and translation scaling. Afterwards, the tone category to which the color of the preset part of the user's face belongs is identified through the operation of step S3.

S3:识别该人脸区域图像中用户面部预设部位的颜色所属的色调类别。S3: Identify the tone category to which the color of the preset part of the user's face in the face area image belongs.

具体通过如下步骤S30-S32的操作来识别预设部位对应的色调类别,包括:Specifically, through the following steps S30-S32, the color tone category corresponding to the preset position is identified, including:

S30:根据人脸关键点,从人脸区域图像中截取出预设部位对应的目标区域图像。S30: According to the key points of the face, intercept the target area image corresponding to the preset part from the face area image.

在本申请实施例中,预设部位可以包括面部皮肤、眼影部位、嘴部等部位的一个或多个。In the embodiment of the present application, the preset parts may include one or more of facial skin, eye shadow parts, mouth and other parts.

对于面部皮肤,首先根据该用户人脸的所有人脸关键点,从人脸区域图像中截取出面部图像,以去除头发及背景区域。然后根据人脸关键点包括的所有眉毛关键点,从面部图像中抠除眉毛区域。根据人脸关键点包括的所有眼睛关键点,从面部图像中抠除眼睛区域。以及根据人脸关键点包括的所有嘴部关键点,从面部图像中抠除嘴部区域。从而得到了仅包括面部皮肤的目标区域图像,如此后续便于识别面部皮肤的色调,排除其他部位的颜色对识别面部皮肤的色调的干扰,提高识别面部皮肤的色调的准确性。For the facial skin, firstly, according to all the key points of the user's face, the face image is cut out from the face area image to remove the hair and background area. Then, according to all the eyebrow key points included in the face key points, the eyebrow area is removed from the face image. According to all the eye key points included in the face key points, the eye area is cut out from the face image. And according to all the key points of the mouth included in the key points of the face, the mouth area is removed from the face image. Thus, the image of the target area including only the facial skin is obtained, which facilitates subsequent identification of the facial skin tone, eliminates the interference of other parts of the color on the identification of the facial skin tone, and improves the accuracy of identifying the facial skin tone.

对于眼影部位,首先根据人脸关键点包括的眼部关键点,从人脸区域图像中截取出眼部图像,该眼部图像包括上下眼皮区域及眼睛区域。眼影化妆是位于上下眼皮区域的,眼睛区域中眼白和眼球的颜色会干扰对眼影部位的颜色识别。因此需要将眼睛区域去除。首先对获得的眼部图像进行预设次数的图像膨胀处理,以确保膨胀处理后的眼部图像中包括带眼妆的眼影部位。预设次数可以为3次或4 次等。然后根据眼部关键点包括的眼睛关键点,从膨胀处理后的眼部区域中抠除眼睛区域,得到眼影部位对应的目标区域图像。如此后续便于识别眼影的色调,排除眼睛的颜色对识别眼影色调的干扰,提高识别眼影色调的准确性。For the eye shadow part, first, according to the eye key points included in the face key points, the eye image is intercepted from the face area image, and the eye image includes the upper and lower eyelid areas and the eye area. Eye shadow makeup is located in the upper and lower eyelid areas, and the color of the whites and eyeballs in the eye area will interfere with the color recognition of the eye shadow area. Therefore the eye area needs to be removed. Firstly, image expansion processing is performed on the obtained eye image for a preset number of times, so as to ensure that the eye image with eye makeup includes the eye shadow part after the expansion processing. The preset number of times may be 3 times or 4 times, etc. Then, according to the eye key points included in the eye key points, the eye area is removed from the expanded eye area to obtain the target area image corresponding to the eye shadow. Such follow-up is convenient for identifying the color tone of the eye shadow, eliminates the interference of eye color on the identification of the eye shadow color tone, and improves the accuracy of identifying the eye shadow color tone.
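The repeated image dilation of the eye region can be illustrated with a naive 3×3 binary dilation, which is what cv2.dilate with a 3×3 kernel would compute; a list-of-lists mask stands in for a real image here for brevity:

```python
def dilate(mask, iterations=3):
    """Naive 3x3 binary dilation repeated `iterations` times, growing the
    eye mask outward so it covers the surrounding eye-shadow region."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    # set every pixel in the 3x3 neighborhood
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                out[ny][nx] = 1
        mask = out
    return mask
```

Each iteration grows the mask by one pixel in every direction; the text's preset of 3 or 4 iterations therefore extends the eye mask a few pixels onto the eyelids.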

对于嘴部，首先从该用户人脸对应的人脸关键点中确定出上嘴唇上沿关键点、上嘴唇下沿关键点、下嘴唇上沿关键点和下嘴唇下沿关键点。对上嘴唇上沿关键点进行线性插值，得到多个上嘴唇上沿插值点。对上嘴唇下沿关键点进行插值，得到多个上嘴唇下沿插值点。根据上嘴唇上沿关键点、上嘴唇上沿插值点、上嘴唇下沿关键点和上嘴唇下沿插值点，从人脸区域图像中截取出上嘴唇图像。For the mouth, the upper-lip upper-edge key points, upper-lip lower-edge key points, lower-lip upper-edge key points and lower-lip lower-edge key points are first determined from the face key points of the user's face. Linear interpolation is performed on the upper-lip upper-edge key points to obtain a plurality of upper-lip upper-edge interpolation points, and on the upper-lip lower-edge key points to obtain a plurality of upper-lip lower-edge interpolation points. The upper lip image is then cut out from the face area image according to the upper-lip upper-edge key points and interpolation points and the upper-lip lower-edge key points and interpolation points.

具体地,多个上嘴唇上沿关键点与多个上嘴唇上沿插值点顺次连线能够得到更加平滑的曲线,该曲线即为上嘴唇上缘的边界线。多个上嘴唇下沿关键点与多个上嘴唇下沿插值点顺次连线能够得到更加平滑的曲线,该曲线即为上嘴唇下缘的边界线。上嘴唇上缘的边界线与上嘴唇下缘的边界线围成的区域即为上嘴唇区域。从人脸区域图像中截取出该上嘴唇区域,得到上嘴唇图像。Specifically, a smoother curve can be obtained by sequentially connecting a plurality of key points on the upper edge of the upper lip with a plurality of interpolation points on the upper edge of the upper lip, and the curve is the boundary line of the upper edge of the upper lip. A smoother curve can be obtained by sequentially connecting multiple key points on the lower edge of the upper lip with multiple interpolation points on the lower edge of the upper lip, which is the boundary line of the lower edge of the upper lip. The area enclosed by the boundary line of the upper edge of the upper lip and the lower edge of the upper lip is the area of the upper lip. The upper lip area is cut out from the face area image to obtain an upper lip image.
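The linear interpolation that densifies the lip-edge polyline can be sketched as follows; the helper name `densify` and the number of inserted points per segment are illustrative assumptions:

```python
def densify(points, samples_per_seg=3):
    """Insert evenly spaced linear interpolation points between consecutive
    lip-edge key points, giving a smoother boundary polyline."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((x0, y0))
        for k in range(1, samples_per_seg + 1):
            t = k / (samples_per_seg + 1)            # fraction along the segment
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(points[-1])
    return out
```

Connecting the densified points in order yields the smoother boundary curve described above, which is then used to bound the lip region when cropping.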

对下嘴唇上沿关键点进行线性插值,得到多个下嘴唇上沿插值点。对下嘴唇下沿关键点进行线性插值,得到多个下嘴唇下沿插值点。根据下嘴唇上沿关键点、下嘴唇上沿插值点、下嘴唇下沿关键点和下嘴唇下沿插值点,从人脸区域图像中截取出下嘴唇图像。Perform linear interpolation on key points on the upper edge of the lower lip to obtain multiple interpolation points on the upper edge of the lower lip. Perform linear interpolation on the key points along the lower edge of the lower lip to obtain multiple interpolation points on the lower edge of the lower lip. According to the key point of the upper edge of the lower lip, the interpolation point of the upper edge of the lower lip, the key point of the lower edge of the lower lip and the interpolation point of the lower edge of the lower lip, the lower lip image is cut out from the image of the face region.

具体地,多个下嘴唇上沿关键点与多个下嘴唇上沿插值点顺次连线能够得到更加平滑的曲线,该曲线即为下嘴唇上缘的边界线。多个下嘴唇下沿关键点与多个下嘴唇下沿插值点顺次连线能够得到更加平滑的曲线,该曲线即为下嘴唇下缘的边界线。下嘴唇上缘的边界线与下嘴唇下缘的边界线围成的区域即为下嘴唇区域。从人脸区域图像中截取出该下嘴唇区域,得到下嘴唇图像。Specifically, a smoother curve can be obtained by sequentially connecting a plurality of key points on the upper edge of the lower lip with a plurality of interpolation points on the upper edge of the lower lip, and the curve is the boundary line of the upper edge of the lower lip. A smoother curve can be obtained by sequentially connecting multiple key points on the lower edge of the lower lip with multiple interpolation points on the lower edge of the lower lip, which is the boundary line of the lower edge of the lower lip. The area enclosed by the boundary line of the upper edge of the lower lip and the lower edge of the lower lip is the area of the lower lip. The lower lip area is cut out from the face area image to obtain the lower lip image.

将得到的上嘴唇图像和下嘴唇图像拼接为嘴部对应的目标区域图像。如此能够去除上嘴唇与下嘴唇之间的区域，尤其在张嘴的情况下能够去除口腔内部区域的图像，避免上嘴唇与下嘴唇之间的区域影响对嘴部色调的识别，提高识别嘴部口红色调的准确性。The obtained upper lip image and lower lip image are stitched into the target area image corresponding to the mouth. This removes the area between the upper lip and the lower lip, and in particular removes the image of the inside of the mouth when the mouth is open, preventing the area between the lips from affecting the recognition of the mouth tone and improving the accuracy of identifying the lipstick tone of the mouth.

本申请实施例对预设部位不进行特殊限定，预设部位也可以为其他部位，并依据人脸关键点，从人脸区域图像中抠取出预设部位对应的目标区域图像，以便从目标区域图像中识别出预设部位的颜色。The embodiment of the present application does not specifically limit the preset part. The preset part may also be another part, and the target area image corresponding to the preset part is extracted from the face area image according to the face key points, so that the color of the preset part can be recognized from the target area image.

S31:获取目标区域图像的像素主色。S31: Obtain the dominant color of the pixels of the image of the target area.

通过上述步骤S30获得预设部位对应的目标区域图像之后，对目标区域图像进行主色调提取，确定目标区域图像的像素主色。具体地，将由opencv读出的目标区域图像由BGRA颜色空间转换成符合PIL读取格式的RGBA颜色空间，然后调用get_palette(image)函数建立一个调色板，使用中值切割算法来聚集相似的颜色，获取目标区域图像的像素主色。After the target area image corresponding to the preset part is obtained through the above step S30, dominant tone extraction is performed on the target area image to determine its pixel dominant color. Specifically, the target area image read by OpenCV is converted from the BGRA color space to the RGBA color space that matches the PIL reading format; the get_palette(image) function is then called to build a palette, and the median-cut algorithm is used to cluster similar colors and obtain the pixel dominant color of the target area image.
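As a simplified stand-in for the palette step above (the actual code uses get_palette() with a median-cut palette; the coarse-binning approach below is an assumption made only to give a self-contained sketch of dominant-color extraction):

```python
from collections import Counter

def dominant_color(pixels, step=32):
    """Quantize each RGB pixel into coarse bins, pick the most populated
    bin, and return the average color of the pixels that fell into it."""
    key = lambda p: (p[0] // step, p[1] // step, p[2] // step)
    bins = Counter(key(p) for p in pixels)
    top = bins.most_common(1)[0][0]          # most populated color bin
    members = [p for p in pixels if key(p) == top]
    n = len(members)
    return tuple(sum(c[i] for c in members) // n for i in range(3))
```

Like median cut, this clusters similar colors before choosing a representative, so isolated outlier pixels do not dominate the result.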

S32:根据目标区域图像的像素主色,确定预设部位的颜色所属的色调类别。S32: Determine the hue category to which the color of the preset part belongs according to the main color of the pixels in the image of the target area.

目标区域图像的像素主色包括R、G、B三个颜色通道的值，对于不同的预设部位妆容色调有各自的特点，如面部皮肤的妆容色调可能包括粉红、白皙、自然、小麦、古铜、黝黑等不同色调，对于粉红、白皙、自然等颜色较浅的色调很难通过RGB颜色空间来判断该像素主色对应的色调类别。而对于嘴部的口红色调可能包括粉裸色、红色、橘色、红棕色等色调，像粉红、红色等差异较小的色调很难通过RGB颜色空间进行区分。因此本申请实施例预先配置好了不同预设部位对应的预设颜色空间，通过预设部位对应的预设颜色空间能够更容易地区分该预设部位的不同妆容色调。而且本申请实施例还预先配置了不同预设部位对应的多个色调类别，以及配置了每种色调类别对应的色调区间。色调类别对应的色调区间中包括属于该色调类别的颜色在对应的预设颜色空间下各颜色通道的取值区间。The pixel dominant color of the target area image consists of the values of the three color channels R, G and B, and the makeup tones of different preset parts have their own characteristics. For example, the makeup tone of the facial skin may include pink, fair, natural, wheat, bronze, dark and other tones, and for lighter tones such as pink, fair and natural it is difficult to judge the tone category of the dominant pixel color in the RGB color space. The lipstick tone of the mouth may include nude-pink, red, orange, reddish-brown and other tones, and closely spaced tones such as pink and red are hard to distinguish in the RGB color space. Therefore, the embodiment of the present application pre-configures preset color spaces corresponding to different preset parts, in which the different makeup tones of each preset part can be distinguished more easily. The embodiment also pre-configures a plurality of tone categories for the different preset parts, and a tone interval for each tone category. The tone interval of a tone category includes the value interval, in the corresponding preset color space, of each color channel of the colors belonging to that tone category.

通过步骤S31获得目标区域图像的像素主色之后，将目标区域图像的像素主色的颜色空间由RGB转换到预设部位对应的预设颜色空间。然后根据该预设颜色空间下像素主色的各颜色通道值，确定各颜色通道值所属的色调区间。将该色调区间对应的色调类别确定为预设部位的颜色所属的色调类别。After the pixel dominant color of the target area image is obtained through step S31, its color space is converted from RGB to the preset color space corresponding to the preset part. Then, according to the color channel values of the dominant color in that preset color space, the tone interval to which the channel values belong is determined, and the tone category corresponding to that tone interval is determined as the tone category to which the color of the preset part belongs.

在本申请实施例中，预设部位包括面部皮肤、眼影部位、嘴部中的至少之一；面部皮肤对应的预设颜色空间可以包括LAB颜色空间，LAB颜色空间是由一个亮度通道和两个颜色通道组成的，每个颜色用L、A、B三个值表示。其中，L表示亮度，A表示从绿色到红色的分量，B表示从蓝色到黄色的分量。LAB颜色空间下较浅的颜色也能够准确地区分开。眼影部位和嘴部对应的预设颜色空间可以包括HSV颜色空间，HSV颜色空间下对于色调相差较小的颜色也能够准确地区分开。In the embodiment of the present application, the preset part includes at least one of the facial skin, the eye shadow part and the mouth. The preset color space corresponding to the facial skin may include the LAB color space, which consists of one luminance channel and two color channels; each color is represented by three values L, A and B, where L represents luminance, A the component from green to red, and B the component from blue to yellow. Lighter colors can also be accurately distinguished in the LAB color space. The preset color space corresponding to the eye shadow part and the mouth may include the HSV color space, in which colors with small hue differences can also be accurately distinguished.

因此在本步骤中,对于面部皮肤部位,将面部皮肤对应的目标区域图像的像素主色由RGB颜色空间转换到LAB颜色空间。根据LAB颜色空间下面部皮肤对应的像素主色的L、A、B三个通道值,确定L、A、B三个通道值所属的色调区间。将该色调区间对应的色调类别确定为该用户人脸的肤色所属的色调类别。Therefore, in this step, for the facial skin part, the pixel dominant color of the target area image corresponding to the facial skin is converted from the RGB color space to the LAB color space. According to the L, A, and B three channel values of the main color of the pixel corresponding to the facial skin under the LAB color space, determine the hue interval to which the three channel values of L, A, and B belong. The hue category corresponding to the hue interval is determined as the hue category to which the skin color of the user's face belongs.

对于眼影部位,将眼影部位对应的目标区域图像的像素主色由RGB颜色空间转换到HSV颜色空间。根据HSV颜色空间下眼影部位对应的像素主色的H、S、 V三个通道值,确定H、S、V三个通道值所属的色调区间。将该色调区间对应的色调类别确定为该用户人脸的眼影颜色所属的色调类别。For the eye shadow part, the pixel main color of the target area image corresponding to the eye shadow part is converted from the RGB color space to the HSV color space. According to the three channel values of H, S and V of the main color of the pixel corresponding to the eye shadow part in the HSV color space, determine the hue interval to which the three channel values of H, S and V belong. The hue category corresponding to the hue interval is determined as the hue category to which the eye shadow color of the user's face belongs.

对于嘴部,将嘴部对应的目标区域图像的像素主色由RGB颜色空间转换到 HSV颜色空间。根据HSV颜色空间下嘴部对应的像素主色的H、S、V三个通道值,确定H、S、V三个通道值所属的色调区间。将该色调区间对应的色调类别确定为该用户人脸的口红颜色所属的色调类别。For the mouth, the pixel main color of the target area image corresponding to the mouth is converted from the RGB color space to the HSV color space. According to the H, S, and V channel values of the main color of the pixel corresponding to the mouth in the HSV color space, determine the hue interval to which the H, S, and V channel values belong. The hue category corresponding to the hue interval is determined as the hue category to which the lipstick color of the user's face belongs.
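The per-part color space conversion and tone-interval lookup can be sketched with the standard library as below. The tone names and hue intervals in `LIP_TONES` are illustrative assumptions; the actual intervals are part of the pre-configured settings and are not given in the text:

```python
import colorsys

# Hypothetical tone table: (name, H interval in degrees).
LIP_TONES = [("red", (345, 360)), ("red", (0, 15)),
             ("orange", (15, 45)), ("nude-pink", (315, 345))]

def lip_tone(rgb):
    """Convert the dominant lip color from RGB to HSV and look up the
    hue interval it falls into."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    deg = h * 360.0  # hue channel in degrees
    for name, (lo, hi) in LIP_TONES:
        if lo <= deg < hi:
            return name
    return "other"
```

A full implementation would also condition on the S and V channels (and use the L, A, B channels for skin), since a hue interval alone cannot separate, say, a saturated red from a pale nude tone.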

步骤903:根据预设部位对应的色调类别,确定用户的妆容色调。Step 903: Determine the user's makeup tone according to the tone category corresponding to the preset part.

本申请实施例中预设部位为一个或多个部位，预先配置了每种色调类别对应的颜色系数，如白皙色调对应的颜色系数可以为0.2，棕橘色调对应的颜色系数可以为0.3，等等。通过上述步骤识别出每个预设部位对应的色调类别后，分别确定每个预设部位的色调类别对应的颜色系数。根据每个预设部位对应的预设权重和颜色系数，计算用户的妆容颜色系数。具体地，分别计算每个预设部位对应的预设权重与颜色系数的乘积，然后对每个预设部位对应的乘积进行求和，将得到的和值作为用户的妆容颜色系数。In the embodiment of the present application, there are one or more preset parts, and a color coefficient is pre-configured for each tone category; for example, the color coefficient corresponding to a fair tone may be 0.2, the color coefficient corresponding to a brown-orange tone may be 0.3, and so on. After the tone category of each preset part is identified through the above steps, the color coefficient corresponding to each part's tone category is determined. The user's makeup color coefficient is then calculated from the preset weight and color coefficient of each preset part. Specifically, the product of the preset weight and the color coefficient of each preset part is calculated, the products are summed, and the resulting sum is taken as the user's makeup color coefficient.

本申请实施例还预先配置了不同妆容的色调类别对应的颜色系数区间。通过上述方式确定出用户的妆容颜色系数之后,确定该妆容颜色系数所属的颜色系数区间,将该颜色系数区间对应的色调类别确定为用户的妆容色调。该妆容色调可以认为是该用户人脸面部妆容的主色调。The embodiment of the present application also pre-configures the color coefficient intervals corresponding to the hue categories of different makeup looks. After the user's makeup color coefficient is determined in the above manner, the color coefficient interval to which the makeup color coefficient belongs is determined, and the hue category corresponding to the color coefficient interval is determined as the user's makeup hue. The makeup color can be considered as the main color of the user's facial makeup.

作为一种示例,预设部位包括面部皮肤、眼影部位和嘴部。识别出面部皮肤的肤色色调、眼影部位的眼影色调以及嘴部的口红色调之后,分别确定肤色色调对应的颜色系数、眼影色调对应的颜色系数以及口红色调对应的颜色系数。假设肤色色调为“白皙”对应的颜色系数为0.2,眼影色调“棕橘”对应的颜色系数为 0.3,口红色调“裸粉”对应的颜色系数为0.5,且面部皮肤、眼影部位和嘴部的预设权重均为0.33,则计算0.33*0.2+0.33*0.3+0.33*0.5=0.33,即计算出该用户的妆容颜色系数为0.33。假设裸粉色调对应的颜色系数区间为0-10,则确定出该用户的妆容色调为裸粉色调。As an example, the preset parts include the facial skin, the eye shadow area and the mouth. After the skin tone of the facial skin, the eye shadow tone of the eye shadow area and the lipstick tone of the mouth are identified, the color coefficients corresponding to the skin tone, the eye shadow tone and the lipstick tone are determined respectively. Suppose the color coefficient corresponding to the skin tone "fair" is 0.2, the coefficient corresponding to the eye shadow tone "brown-orange" is 0.3, the coefficient corresponding to the lipstick tone "nude pink" is 0.5, and the preset weights of the facial skin, eye shadow area and mouth are all 0.33. Then 0.33*0.2+0.33*0.3+0.33*0.5=0.33 is calculated, that is, the user's makeup color coefficient is 0.33. Assuming that the color coefficient interval corresponding to the nude pink tone is 0-10, the user's makeup tone is determined to be nude pink.
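The weighted-sum calculation in the example above can be sketched as follows; the coefficients and weights are the illustrative values from the example, and the interval table is an assumption:

```python
# Hue category -> color coefficient, per the example values above.
COLOR_COEFFS = {"fair": 0.2, "brown-orange": 0.3, "nude-pink": 0.5}

# (coefficient interval, makeup tone); the 0-10 interval follows the text.
COEFF_INTERVALS = [((0, 10), "nude-pink")]

def makeup_tone(part_tones, weights):
    """part_tones: {part: hue category}; weights: {part: preset weight}.
    Returns the makeup color coefficient and the tone of its interval."""
    coeff = sum(weights[p] * COLOR_COEFFS[t] for p, t in part_tones.items())
    for (lo, hi), tone in COEFF_INTERVALS:
        if lo <= coeff < hi:
            return coeff, tone
    return coeff, None

coeff, tone = makeup_tone(
    {"skin": "fair", "eyeshadow": "brown-orange", "mouth": "nude-pink"},
    {"skin": 0.33, "eyeshadow": 0.33, "mouth": 0.33},
)
# coeff = 0.33*0.2 + 0.33*0.3 + 0.33*0.5 = 0.33, matching the worked example
```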

作为另一些示例,本申请实施例中预设部位也可以只包括面部皮肤和嘴部,结合面部肤色和口红颜色来识别用户的妆容色调。或者,预设部位也可以只包括面部皮肤和眼影部位,结合面部肤色和眼影颜色来识别用户的妆容色调。或者,预设部位也可以只包括眼影部位和嘴部,结合口红颜色和眼影颜色来识别用户的妆容色调。或者,预设部位也可以只包括面部皮肤,通过面部肤色来识别用户的妆容色调。或者,预设部位也可以只包括眼影部位,通过眼影颜色来识别用户的妆容色调。或者,预设部位也可以只包括嘴部,通过口红颜色来识别用户的妆容色调。As some other examples, the preset parts in this embodiment of the present application may only include facial skin and mouth, and the user's makeup tone is identified in combination with facial skin color and lipstick color. Alternatively, the preset parts may only include facial skin and eye shadow parts, and the user's makeup tone can be identified in combination with facial skin color and eye shadow color. Alternatively, the preset parts may only include the eye shadow part and the mouth part, and the user's makeup color can be identified by combining the lipstick color and the eye shadow color. Alternatively, the preset part may only include facial skin, and the user's makeup tone is identified through the facial skin color. Alternatively, the preset parts may only include the eye shadow part, and the user's makeup tone is identified through the color of the eye shadow. Alternatively, the preset part may only include the mouth, and the user's makeup tone is identified through the lipstick color.

在本申请实施例中,识别化妆图像中用户预设部位的颜色所属的色调类别,进而依据预设部位对应的色调类别,确定用户的妆容色调。对用户的面部进行至少一个部位的色调提取,利用提取的色调自动识别用户的妆容色调。进一步地,通过对肤色、眼影、口红等一个或多个维度的妆面组合,仅提取对妆容颜色影响较大的部位的颜色信息,排除其他妆效对妆容色调的影响,增加了对妆容色调识别的统一性和准确性。且仅通过图像处理就能准确地识别出用户的妆容色调,无需采用深度学习模型进行处理,运算量小,成本低,减少了服务器的处理压力,减少了算法对硬件资源的依赖以及人力的投入成本。In the embodiments of the present application, the hue category of the color of each preset part of the user's face in the makeup image is identified, and the user's makeup tone is then determined according to the hue categories of the preset parts. The tone of at least one part of the user's face is extracted, and the extracted tones are used to automatically identify the user's makeup tone. Furthermore, by combining one or more dimensions of the makeup, such as skin color, eye shadow and lipstick, only the color information of the parts that most influence the makeup color is extracted, excluding the influence of other makeup effects on the makeup tone, which improves the consistency and accuracy of makeup tone recognition. Moreover, the user's makeup tone can be accurately identified through image processing alone, without a deep learning model: the amount of computation is small, the cost is low, the processing pressure on the server is reduced, and both the algorithm's dependence on hardware resources and the labor cost are reduced.

本申请实施例还提供一种妆容颜色识别装置,该装置用于执行上述任一实施例提供的妆容颜色识别方法。如图24所示,该装置包括:An embodiment of the present application further provides a makeup color recognition device, which is used to implement the makeup color recognition method provided in any one of the above embodiments. As shown in Figure 24, the device includes:

获取模块100,用于获取用户的化妆图像;An acquisition module 100, configured to acquire the makeup image of the user;

识别模块200,用于识别化妆图像中用户面部预设部位的颜色所属的色调类别;An identification module 200, configured to identify the hue category to which the color of the preset part of the user's face in the makeup image belongs;

确定模块300,用于根据预设部位对应的色调类别,确定用户的妆容色调。The determination module 300 is configured to determine the user's makeup tone according to the tone category corresponding to the preset part.

识别模块200,用于检测化妆图像对应的人脸关键点;根据人脸关键点,获取化妆图像对应的人脸区域图像;识别人脸区域图像中用户面部预设部位的颜色所属的色调类别。The identification module 200 is used to detect the key points of the face corresponding to the makeup image; obtain the face area image corresponding to the makeup image according to the key points of the face; and identify the hue category to which the color of the preset part of the user's face in the face area image belongs.

识别模块200,用于根据人脸关键点,从人脸区域图像中截取出预设部位对应的目标区域图像;获取目标区域图像的像素主色;根据像素主色,确定预设部位的颜色所属的色调类别。The identification module 200 is used to crop the target area image corresponding to the preset part from the face area image according to the face key points; obtain the dominant pixel color of the target area image; and determine, according to the dominant pixel color, the hue category to which the color of the preset part belongs.

预设部位包括面部皮肤;识别模块200,用于根据人脸关键点,从人脸区域图像中截取出面部图像;根据人脸关键点,从面部图像中抠除眉毛区域、眼睛区域和嘴部区域,得到面部皮肤对应的目标区域图像。The preset parts include the facial skin; the identification module 200 is used to crop the facial image from the face area image according to the face key points, and to remove the eyebrow area, the eye area and the mouth area from the facial image according to the face key points, obtaining the target area image corresponding to the facial skin.

预设部位包括眼影部位;识别模块200,用于根据人脸关键点包括的眼部关键点,从人脸区域图像中截取出眼部图像;对眼部图像进行预设次数的图像膨胀处理;根据眼部关键点包括的眼睛关键点,从膨胀处理后的眼部区域中抠除眼睛区域,得到眼影部位对应的目标区域图像。The preset parts include the eye shadow area; the identification module 200 is used to crop the eye image from the face area image according to the eye-region key points included in the face key points; perform image dilation on the eye image a preset number of times; and remove the eye area from the dilated eye region according to the eye key points included among the eye-region key points, obtaining the target area image corresponding to the eye shadow.

预设部位包括嘴部;识别模块200,用于分别对人脸关键点包括的上嘴唇上沿关键点、上嘴唇下沿关键点、下嘴唇上沿关键点和下嘴唇下沿关键点进行插值,得到上嘴唇上沿插值点、上嘴唇下沿插值点、下嘴唇上沿插值点及下嘴唇下沿插值点;根据上嘴唇上沿关键点、上嘴唇上沿插值点、上嘴唇下沿关键点和上嘴唇下沿插值点,从人脸区域图像中截取出上嘴唇图像;根据下嘴唇上沿关键点、下嘴唇上沿插值点、下嘴唇下沿关键点和下嘴唇下沿插值点,从人脸区域图像中截取出下嘴唇图像;将上嘴唇图像和下嘴唇图像拼接为嘴部对应的目标区域图像。The preset parts include the mouth; the identification module 200 is used to interpolate, respectively, the upper-lip upper-edge key points, upper-lip lower-edge key points, lower-lip upper-edge key points and lower-lip lower-edge key points included in the face key points, obtaining upper-lip upper-edge, upper-lip lower-edge, lower-lip upper-edge and lower-lip lower-edge interpolation points; crop the upper-lip image from the face area image according to the upper-lip upper-edge key points, upper-lip upper-edge interpolation points, upper-lip lower-edge key points and upper-lip lower-edge interpolation points; crop the lower-lip image from the face area image according to the lower-lip upper-edge key points, lower-lip upper-edge interpolation points, lower-lip lower-edge key points and lower-lip lower-edge interpolation points; and stitch the upper-lip image and the lower-lip image into the target area image corresponding to the mouth.

确定模块300,用于将像素主色的颜色空间转换到预设部位对应的预设颜色空间;根据预设颜色空间下像素主色的各颜色通道值,确定各颜色通道值所属的色调区间;将色调区间对应的色调类别确定为预设部位的颜色所属的色调类别。The determination module 300 is used to convert the color space of the dominant pixel color to the preset color space corresponding to the preset part; determine, according to the color channel values of the dominant pixel color in the preset color space, the hue interval to which the channel values belong; and determine the hue category corresponding to that hue interval as the hue category to which the color of the preset part belongs.

预设部位包括面部皮肤、眼影部位、嘴部中的至少之一;面部皮肤对应的预设颜色空间包括LAB颜色空间,眼影部位和嘴部对应的预设颜色空间包括HSV 颜色空间。The preset part includes at least one of the facial skin, the eye shadow area and the mouth; the preset color space corresponding to the facial skin includes the LAB color space, and the preset color space corresponding to the eye shadow area and the mouth includes the HSV color space.

预设部位包括面部皮肤、眼影部位和嘴部中的一个或多个部位,确定模块300,用于确定每个预设部位的色调类别对应的颜色系数;根据每个预设部位对应的预设权重和颜色系数,计算用户的妆容颜色系数;将妆容颜色系数对应的色调类别确定为用户的妆容色调。The preset parts include one or more of the facial skin, the eye shadow area and the mouth. The determination module 300 is used to determine the color coefficient corresponding to the hue category of each preset part; calculate the user's makeup color coefficient according to the preset weight and color coefficient corresponding to each preset part; and determine the hue category corresponding to the makeup color coefficient as the user's makeup tone.

识别模块200,用于根据人脸关键点,对化妆图像及人脸关键点进行旋转矫正;根据矫正后的人脸关键点,从矫正后的化妆图像中截取包含人脸区域的图像;将包含人脸区域的图像缩放至预设尺寸,得到化妆图像对应的人脸区域图像。The identification module 200 is used to rotate and correct the makeup image and the face key points according to the face key points; crop the image containing the face area from the corrected makeup image according to the corrected face key points; and scale the image containing the face area to a preset size, obtaining the face area image corresponding to the makeup image.

识别模块200,用于根据人脸关键点包括的左眼关键点和右眼关键点,分别确定左眼中心坐标和右眼中心坐标;根据左眼中心坐标和右眼中心坐标,确定化妆图像对应的旋转角度及旋转中心点坐标;根据旋转角度和旋转中心点坐标,对化妆图像及人脸关键点进行旋转矫正。The identification module 200 is configured to determine the left-eye center coordinates and the right-eye center coordinates according to the left-eye key points and the right-eye key points included in the face key points; determine, according to the left-eye and right-eye center coordinates, the rotation angle and the rotation center coordinates corresponding to the makeup image; and rotate and correct the makeup image and the face key points according to the rotation angle and the rotation center coordinates.

识别模块200,用于根据矫正后的人脸关键点,对矫正后的化妆图像中包含的人脸区域进行图像截取。The recognition module 200 is configured to perform image capture on the face area contained in the corrected makeup image according to the key points of the corrected face.

识别模块200,用于从矫正后的人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值;根据最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值,确定矫正后的化妆图像中人脸区域对应的截取框;根据截取框,从矫正后的化妆图像中截取出包含人脸区域的图像。The identification module 200 is used to determine the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate among the corrected face key points; determine, according to these four values, the cropping box corresponding to the face area in the corrected makeup image; and crop the image containing the face area from the corrected makeup image according to the cropping box.

识别模块200,还用于将截取框放大预设倍数;根据放大后的截取框,从矫正后的化妆图像中截取出包含人脸区域的图像。The recognition module 200 is further configured to enlarge the clipping frame by a preset multiple; and according to the zoomed-in clipping frame, clip an image containing a human face area from the corrected makeup image.

识别模块200,还用于根据包含人脸区域的图像的尺寸及预设尺寸,对矫正后的人脸关键点进行缩放平移处理。The identification module 200 is further configured to perform scaling and translation processing on the corrected key points of the face according to the size of the image including the face area and the preset size.

该装置还包括:完整人脸检测模块,用于检测化妆图像中是否至少包含一个完整人脸图像;如果是,则执行识别化妆图像中用户面部预设部位的颜色所属的色调类别的操作;如果否,则发送提示信息给用户的终端,提示信息用于提示用户提供至少包含一个完整人脸图像的化妆图像。The device also includes a complete-face detection module, which is used to detect whether the makeup image contains at least one complete face image; if so, the operation of identifying the hue category to which the color of the preset part of the user's face in the makeup image belongs is performed; if not, a prompt message is sent to the user's terminal, the prompt message being used to prompt the user to provide a makeup image containing at least one complete face image.

本申请的上述实施例提供的妆容颜色识别装置与本申请实施例提供的妆容颜色识别方法出于相同的发明构思,具有与其存储的应用程序所采用、运行或实现的方法相同的有益效果。The makeup color recognition device provided by the above embodiments of the present application is based on the same inventive concept as the makeup color recognition method provided by the embodiments of the present application, and has the same beneficial effect as the method adopted, run or realized by its stored application program.

本申请实施方式还提供一种电子设备,以执行上述妆容颜色识别方法。其示出了本申请的一些实施方式所提供的一种电子设备的示意图。电子设备包括:处理器,存储器,总线和通信接口,所述处理器、通信接口和存储器通过总线连接;所述存储器中存储有可在所述处理器上运行的计算机程序,所述处理器运行所述计算机程序时执行本申请前述任一实施方式所提供的妆容颜色识别方法。Embodiments of the present application further provide an electronic device to execute the above makeup color recognition method. A schematic diagram of an electronic device provided by some embodiments of the present application is shown. The electronic device includes a processor, a memory, a bus and a communication interface; the processor, the communication interface and the memory are connected through the bus; the memory stores a computer program that can run on the processor, and when the processor runs the computer program, the makeup color recognition method provided in any of the foregoing embodiments of the present application is executed.

其中,存储器可能包含高速随机存取存储器(RAM:Random Access Memory),也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个通信接口(可以是有线或者无线)实现该装置网元与至少一个其他网元之间的通信连接,可以使用互联网、广域网、本地网、城域网等。The memory may include a high-speed random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory. The communication connection between this device's network element and at least one other network element is realized through at least one communication interface (wired or wireless), using the Internet, a wide area network, a local network, a metropolitan area network, or the like.

总线可以是ISA总线、PCI总线或EISA总线等。所述总线可以分为地址总线、数据总线、控制总线等。其中,存储器用于存储程序,所述处理器在接收到执行指令后,执行所述程序,前述本申请实施例任一实施方式揭示的所述妆容颜色识别方法可以应用于处理器中,或者由处理器实现。The bus may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The memory is used to store a program, and the processor executes the program after receiving an execution instruction; the makeup color recognition method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor.

处理器可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器 (DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。A processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.

本申请实施例提供的电子设备与本申请实施例提供的妆容颜色识别方法出于相同的发明构思,具有与其采用、运行或实现的方法相同的有益效果。The electronic device provided in the embodiment of the present application is based on the same inventive concept as the makeup color recognition method provided in the embodiment of the present application, and has the same beneficial effect as the method adopted, operated or realized.

本申请实施方式还提供一种与前述实施方式所提供的妆容颜色识别方法对应的计算机可读存储介质,其示出的计算机可读存储介质为光盘,其上存储有计算机程序(即程序产品),所述计算机程序在被处理器运行时,会执行前述任意实施方式所提供的妆容颜色识别方法。Embodiments of the present application also provide a computer-readable storage medium corresponding to the makeup color recognition method provided in the foregoing embodiments; the computer-readable storage medium shown is an optical disc on which a computer program (that is, a program product) is stored, and when the computer program is run by a processor, it executes the makeup color recognition method provided in any of the foregoing embodiments.

需要说明的是,所述计算机可读存储介质的例子还可以包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他光学、磁性存储介质,在此不再一一赘述。It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, and other optical or magnetic storage media, which will not be enumerated here.

本申请的上述实施例提供的计算机可读存储介质与本申请实施例提供的妆容颜色识别方法出于相同的发明构思,具有与其存储的应用程序所采用、运行或实现的方法相同的有益效果。The computer-readable storage medium provided by the above-mentioned embodiments of the present application is based on the same inventive concept as the makeup color recognition method provided by the embodiments of the present application, and has the same beneficial effect as the method adopted, executed or realized by the stored application program.

实施例九 Embodiment Nine

1、一种图像处理方法,包括:1. An image processing method, comprising:

检测用户的人脸图像中面部瑕疵对应的关键点坐标;Detect key point coordinates corresponding to facial blemishes in the user's face image;

根据所述面部瑕疵对应的关键点坐标和预设素材图像,生成瑕疵纹理贴图;According to the key point coordinates corresponding to the facial blemish and the preset material image, a blemish texture map is generated;

根据所述人脸图像和所述瑕疵纹理贴图,生成所述人脸图像对应的遮瑕人脸图像。Generate a concealed human face image corresponding to the human face image according to the human face image and the blemish texture map.

2、根据1,所述根据所述面部瑕疵对应的关键点坐标和预设素材图像,生成瑕疵纹理贴图,包括:2. According to 1, generating the blemish texture map according to the key point coordinates corresponding to the facial blemish and the preset material image includes:

获取所述人脸图像对应的空白纹理图像;Obtain a blank texture image corresponding to the face image;

根据所述面部瑕疵对应的关键点坐标,在所述空白纹理图像上定位出所述面部瑕疵对应的位置;Locating a position corresponding to the facial blemish on the blank texture image according to the key point coordinates corresponding to the facial blemish;

在所述空白纹理图像上所述面部瑕疵对应的位置处对所述预设素材图像进行贴图,得到瑕疵纹理贴图。Mapping the preset material image at the position corresponding to the facial blemish on the blank texture image to obtain a blemish texture map.

3、根据1,所述根据所述人脸图像和所述瑕疵纹理贴图,生成所述人脸图像对应的遮瑕人脸图像,包括:3. According to 1, according to the face image and the blemish texture map, generating a blemish-covered face image corresponding to the face image includes:

对所述人脸图像进行模糊处理,得到所述人脸图像对应的磨皮图像;Blurring the face image to obtain a skin-smoothed image corresponding to the face image;

根据所述瑕疵纹理贴图,对所述人脸图像中的瑕疵区域与所述磨皮图像中位置相同的瑕疵区域进行图像融合,得到所述人脸图像对应的遮瑕人脸图像。Performing, according to the blemish texture map, image fusion on the blemish area in the face image and the blemish area at the same position in the skin-smoothed image, to obtain the concealed face image corresponding to the face image.

4、根据3,根据所述瑕疵纹理贴图,对所述人脸图像中的瑕疵区域与所述磨皮图像中位置相同的瑕疵区域进行图像融合,包括:4. According to 3, performing image fusion, according to the blemish texture map, on the blemish area in the face image and the blemish area at the same position in the skin-smoothed image includes:

从所述瑕疵纹理贴图中获取第一关键点坐标处的像素点的透明度A值;所述第一关键点坐标为所述面部瑕疵对应的关键点坐标中的任一关键点坐标;Obtain the transparency A value of the pixel at the first key point coordinates from the blemish texture map; the first key point coordinates are any key point coordinates in the key point coordinates corresponding to the facial blemish;

根据所述透明度A值,对所述人脸图像中所述第一关键点坐标处的像素点与所述磨皮图像中所述第一关键点坐标处的像素点进行融合处理。Performing, according to the transparency A value, fusion processing on the pixel at the first key point coordinates in the face image and the pixel at the first key point coordinates in the skin-smoothed image.

5、根据4,所述根据所述透明度A值,对所述人脸图像中所述第一关键点坐标处的像素点与所述磨皮图像中所述第一关键点坐标处的像素点进行融合处理,包括:5. According to 4, performing, according to the transparency A value, fusion processing on the pixel at the first key point coordinates in the face image and the pixel at the first key point coordinates in the skin-smoothed image includes:

从所述人脸图像中获取所述第一关键点坐标处的像素点的第一RGB颜色值,以及从所述磨皮图像中获取所述第一关键点坐标处的像素点的第二RGB颜色值;Acquiring the first RGB color value of the pixel at the first key point coordinates from the face image, and acquiring the second RGB color value of the pixel at the first key point coordinates from the skin-smoothed image;

根据所述第一RGB颜色值、所述第二RGB颜色值和所述透明度A值,计算所述人脸图像中所述第一关键点坐标处的像素点与所述磨皮图像中所述第一关键点坐标处的像素点融合后的融合像素值;Calculating, according to the first RGB color value, the second RGB color value and the transparency A value, the fused pixel value obtained by fusing the pixel at the first key point coordinates in the face image with the pixel at the first key point coordinates in the skin-smoothed image;

将所述人脸图像中所述第一关键点坐标处的像素点当前的像素值重置为所述融合像素值。Resetting the current pixel value of the pixel at the first key point coordinates in the face image to the fused pixel value.

6、根据5,根据所述第一RGB颜色值、所述第二RGB颜色值和所述透明度 A值,计算所述人脸图像中所述第一关键点坐标处的像素点与所述磨皮图像中所述第一关键点坐标处的像素点融合后的融合像素值,包括:6. According to 5, calculating, according to the first RGB color value, the second RGB color value and the transparency A value, the fused pixel value obtained by fusing the pixel at the first key point coordinates in the face image with the pixel at the first key point coordinates in the skin-smoothed image includes:

计算所述第一RGB颜色值与所述透明度A值之间的第一乘积以及所述第二RGB颜色值与所述透明度A值之间的第二乘积;计算所述第一乘积和所述第二乘积的平均值;Calculating a first product of the first RGB color value and the transparency A value and a second product of the second RGB color value and the transparency A value, and calculating the average of the first product and the second product;

将所述平均值确定为所述人脸图像中所述第一关键点坐标处的像素点与所述磨皮图像中所述第一关键点坐标处的像素点融合后的融合像素值。Determining the average as the fused pixel value obtained by fusing the pixel at the first key point coordinates in the face image with the pixel at the first key point coordinates in the skin-smoothed image.

7、根据1-6任一项,所述预设素材图像为透明度渐变的图像,像素点的透明度由所述预设素材图像的边缘向所述预设素材图像的中心递减。7. According to any one of 1-6, the preset material image is an image with gradual transparency, and the transparency of pixels decreases from the edge of the preset material image to the center of the preset material image.

8、根据1-6任一项,还包括:8. According to any one of 1-6, it also includes:

检测所述人脸图像中是否包含人脸区域;Detecting whether a face area is included in the face image;

如果是,则执行所述检测用户的人脸图像中面部瑕疵对应的关键点坐标的操作;If yes, then perform the operation of detecting the key point coordinates corresponding to facial blemishes in the user's face image;

如果否,则发送提示信息给所述用户的终端,所述提示信息用于提示所述用户所述人脸图像中需包含人脸区域。If not, send prompt information to the terminal of the user, where the prompt information is used to prompt the user that the face image needs to include a face area.

9、一种图像处理装置,包括:9. An image processing device, comprising:

检测模块,用于检测用户的人脸图像中面部瑕疵对应的关键点坐标;A detection module, configured to detect key point coordinates corresponding to facial blemishes in the user's face image;

第一生成模块,用于根据所述面部瑕疵对应的关键点坐标和预设素材图像,生成瑕疵纹理贴图;The first generation module is used to generate a blemish texture map according to the key point coordinates corresponding to the facial blemish and the preset material image;

第二生成模块,用于根据所述人脸图像和所述瑕疵纹理贴图,生成所述人脸图像对应的遮瑕人脸图像。The second generating module is configured to generate a concealed human face image corresponding to the human face image according to the human face image and the blemish texture map.

10、一种电子设备,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器运行所述计算机程序以实现1-8任一项所述的方法。10. An electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor running the computer program to implement the method of any one of items 1-8.

11、一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行实现1-8任一项所述的方法。11. A computer-readable storage medium on which a computer program is stored, the program being executed by a processor to implement the method of any one of items 1-8.

目前,人们对修图功能的需求越来越多,例如对于人脸的痘痘、色斑等瑕疵,用户拍照之后通常想要通过修图功能将这些瑕疵去除。相关技术中通常需要用户对人脸图像中的瑕疵区域针对每个像素进行模糊处理,如此需要用户自己确定图像中的瑕疵位置,并手动触发模糊处理,准确性差,且效率很低。At present, people have more and more demands for photo editing functions. For example, for blemishes such as acne and color spots on the face, users usually want to use the photo editing function to remove these blemishes after taking pictures. In the related art, the user is usually required to blur the defect area in the face image for each pixel, so the user needs to determine the position of the defect in the image and manually trigger the blur processing, which has poor accuracy and low efficiency.

基于此,本申请实施例提供一种图像处理方法,该方法自动识别人脸图像中的痘痘、斑点等面部瑕疵,在人脸图像对应的空白纹理图像中针对面部瑕疵进行素材贴图得到瑕疵纹理贴图。基于该瑕疵纹理贴图,自动生成具有遮瑕效果的遮瑕人脸图像。如此无需用户人工针对每个像素进行优化,大大提升了处理性能,能够获得实时对人脸图像进行遮瑕的效果。Based on this, an embodiment of the present application provides an image processing method. The method automatically identifies facial blemishes such as pimples and spots in a face image, and maps material images onto the blank texture image corresponding to the face image at the blemish positions, obtaining a blemish texture map. Based on the blemish texture map, a concealed face image with a blemish-covering effect is automatically generated. This spares the user from manually optimizing each pixel, greatly improves processing performance, and enables real-time concealment of blemishes in the face image.

参见图25,该方法具体包括以下步骤:Referring to Figure 25, the method specifically includes the following steps:

步骤1001:检测用户的人脸图像中面部瑕疵对应的关键点坐标。Step 1001: Detect key point coordinates corresponding to facial blemishes in the user's face image.

本申请实施例的执行主体为服务器或其他任意能够提供遮瑕修图功能的终端,本申请实施例以执行主体为服务器为例进行说明。The execution subject of the embodiment of the present application is a server or any other terminal capable of providing the function of concealing and retouching images. The embodiment of the present application takes the execution subject as the server as an example for illustration.

当用户需要使用遮瑕修图功能时,用户将需要修图的人脸图像发送给服务器。具体地,用户的手机或电脑等用户终端上可以设置有与服务器提供的遮瑕修图功能相匹配的客户端,该客户端中设置有用于提交待处理图像的接口。当客户端检测到用户点击该接口时显示修图界面,该修图界面中可以包括拍摄按钮和/或本地文件上传接口。当客户端检测到用户点击该拍摄按钮时,调用用户终端上的摄像头拍摄用户的人脸图像。当客户端检测到用户点击本地文件上传接口时,显示本地文件夹目录,以使用户从中选择需要上传的人脸图像。When the user needs to use the function of concealing and retouching, the user sends the face image to be retouched to the server. Specifically, a user terminal such as a user's mobile phone or a computer may be provided with a client that matches the concealment and retouching function provided by the server, and the client may be provided with an interface for submitting an image to be processed. When the client detects that the user clicks on the interface, a picture editing interface is displayed, and the picture editing interface may include a shooting button and/or a local file upload interface. When the client detects that the user clicks the capture button, it invokes the camera on the user terminal to capture the user's face image. When the client detects that the user clicks on the local file upload interface, the local folder directory is displayed so that the user can select the face image to be uploaded.

客户端通过摄像头拍摄到用户的人脸图像,或者接收到用户上传的人脸图像,之后发送该人脸图像给服务器。服务器接收用户的人脸图像。The client captures the user's face image through the camera, or receives the user's uploaded face image, and then sends the face image to the server. The server receives the face image of the user.

在本申请的另一些实施例中,服务器获得用户的人脸图像后,还检测该人脸图像中是否包含人脸区域。若该人脸图像中包含人脸区域,则按照本实施例提供的方法来对人脸区域中存在的面部瑕疵进行遮瑕处理。若检测出该人脸图像中不包含人脸区域,则发送提示信息给用户的终端。用户的终端接收并显示该提示信息,以提示用户该人脸图像中需包含人脸区域。例如,提示信息可以为“请提供包含人脸区域的人脸图像”。In some other embodiments of the present application, after the server obtains the user's face image, it also detects whether the face image contains a face area. If the face image includes a face area, the facial blemishes existing in the face area are concealed according to the method provided in this embodiment. If it is detected that the face image does not contain a face area, a prompt message is sent to the user's terminal. The user's terminal receives and displays the prompt information to remind the user that the face image needs to include a face area. For example, the prompt information may be "please provide a face image including a face area".

通过上述方式获得包含人脸区域的人脸图像之后,通过预设的皮肤检测模型检测该人脸图像中是否存在痘痘或斑点等面部瑕疵,若检测出该人脸图像中不包含面部瑕疵,则直接返回该人脸图像给用户终端,或者,返回用于提示不存在面部瑕疵的信息给用户终端。After the face image containing the face area is obtained in the above manner, a preset skin detection model detects whether facial blemishes such as pimples or spots exist in the face image. If it is detected that the face image contains no facial blemish, the face image is returned directly to the user terminal, or information prompting that no facial blemish exists is returned to the user terminal.

若通过预设的皮肤检测模型检测到该人脸图像中包括至少一处面部瑕疵,则识别并记录每一处面部瑕疵对应的关键点坐标,面部瑕疵对应的关键点坐标包括面部瑕疵的顶点坐标和纹理坐标。If it is detected by the preset skin detection model that at least one facial blemish is included in the face image, the key point coordinates corresponding to each facial blemish are identified and recorded, and the key point coordinates corresponding to the facial blemish include the vertex coordinates of the facial blemish and texture coordinates.

步骤1002:根据面部瑕疵对应的关键点坐标和预设素材图像,生成瑕疵纹理贴图。Step 1002: Generate a blemish texture map according to the key point coordinates corresponding to the facial blemish and the preset material image.

首先获取用户的人脸图像对应的空白纹理图像。可以将人脸图像中带有颜色的纹理修改为空白纹理得到空白纹理图像。即该空白纹理图中人脸对应的位置处是空白的。Firstly, obtain the blank texture image corresponding to the user's face image. A blank texture image can be obtained by modifying the colored texture in the face image to a blank texture. That is, the position corresponding to the face in the blank texture map is blank.

然后根据检测到的面部瑕疵对应的关键点坐标,在空白纹理图像上定位出面部瑕疵对应的位置。利用GPU(Graphics Processing Unit,图形处理器)的渲染技术,在空白纹理图像上面部瑕疵对应的位置处对预设素材图像进行贴图,得到瑕疵纹理贴图。Then, according to the key point coordinates corresponding to the detected facial blemishes, the positions of the facial blemishes are located on the blank texture image. Using GPU (Graphics Processing Unit) rendering, the preset material image is mapped onto the blank texture image at the positions corresponding to the facial blemishes, obtaining the blemish texture map.

The color of the preset material image may be any color such as red, yellow, or green. The preset material image is an image with a transparency gradient: the transparency of its pixels decreases from the edge of the preset material image toward its center. The preset material image is designed after the characteristics of blemishes such as acne and pigmentation spots: most facial blemishes are roughly circular or elliptical, with a lighter color at the edge that deepens toward the center. Therefore, in the embodiments of the present application, the preset material image is set as a circular or elliptical image whose transparency decreases from the edge to the center, i.e., the closer to the edge, the more transparent, and the closer to the center, the more opaque. This makes the preset material image better match the characteristics of real facial blemishes, so that when it is subsequently used to conceal the facial blemishes in the face image, the accuracy is higher and the concealing effect is better.
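The described material image (circular, transparent at the edge and opaque at the center) can be sketched as follows. This is an illustrative NumPy construction, not the patent's implementation; the `size` and `color` values and the linear radial falloff are assumptions:

```python
import numpy as np

def make_material(size=64, color=(255, 0, 0)):
    """Circular RGBA material image whose opacity (A) rises from the
    edge toward the center, i.e. the edge is the most transparent."""
    y, x = np.mgrid[0:size, 0:size]
    c = (size - 1) / 2.0
    r = np.sqrt((x - c) ** 2 + (y - c) ** 2) / c   # 0 at center, ~1 at edge
    alpha = np.clip(1.0 - r, 0.0, 1.0)             # opaque center, transparent edge
    img = np.zeros((size, size, 4), dtype=np.float32)
    img[..., :3] = color                           # constant color channel
    img[..., 3] = alpha                            # radial transparency gradient
    return img

material = make_material()
```

Any monotonic falloff (e.g. quadratic) would satisfy the "decreasing from edge to center" property equally well; linear is just the simplest choice.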

The preset material image is pasted onto the blank texture image at the positions of the key point coordinates corresponding to the user's facial blemishes, yielding the blemish texture map. The blemish texture map thus contains both the location information of the facial blemishes and the transparency distribution information of the preset material image.

As shown in Figure 26, image (a) is the user's face image, and image (b) is the blemish texture map corresponding to the face image shown in (a). The circular spot images in (b) are instances of the preset material image; it can be seen that the edge of the preset material image is lighter in color and more transparent, while toward the center the color deepens and the transparency decreases.

Step 1003: generate a concealed face image corresponding to the face image according to the face image and the blemish texture map.

The server generates the concealed face image through the following steps S1 and S2:

S1: blur the face image to obtain a skin-smoothed image corresponding to the face image.

The face image is blurred by a preset blur algorithm, which may be a mean (box) blur algorithm, a Gaussian blur algorithm, or the like. In the embodiments of the present application, only the face area in the face image may be blurred: the face area is first identified in the face image, and then full-face skin smoothing is applied to the face area through the preset blur algorithm to obtain the corresponding skin-smoothed image.
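As a rough stand-in for the preset blur algorithm (the source names mean blur and Gaussian blur but does not show the implementation), a separable mean (box) blur on a single-channel image might look like this; the kernel size `k` is an assumption:

```python
import numpy as np

def box_blur(img, k=5):
    """Mean (box) blur via two 1-D moving averages (separable kernel).
    Edges are handled by replicating border pixels."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    kernel = np.ones(k) / k
    # horizontal pass over rows, then vertical pass over columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, tmp)

smoothed = box_blur(np.full((8, 8), 7.0))   # a flat image blurs to itself
```

In practice a library routine (e.g. an OpenCV blur) or the GPU pipeline mentioned above would be used; this sketch only shows the effect of the mean-blur option.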

S2: according to the blemish texture map, perform image fusion between the blemish areas in the face image and the blemish areas at the same positions in the skin-smoothed image to obtain the concealed face image corresponding to the face image.

The embodiments of the present application only process the blemish areas where the facial blemishes detected in the face image are located: the blemish areas in the face image are fused with the blemish areas at the same positions in the skin-smoothed image, while the non-blemish areas of the face image are left unprocessed, keeping the original image of the non-blemish areas unchanged. When fusing a blemish area, the image fusion is performed based on the positions of the facial blemishes provided by the blemish texture map and the transparency distribution of the preset material image.

Since the fusion process is the same for every pixel of a blemish area, the embodiments of the present application describe it in detail using a single pixel as an example.

Specifically, any one of the key point coordinates corresponding to the facial blemishes is referred to as a first key point coordinate. In the embodiments of the present application, the color space of each image is RGBA, where A (also called Alpha) is the transparency parameter. The transparency value A of the pixel at the first key point coordinate is obtained from the blemish texture map. Then, according to the transparency value A, the pixel at the first key point coordinate in the face image is fused with the pixel at the first key point coordinate in the skin-smoothed image.

Specifically, the first RGB color value of the pixel at the first key point coordinate is obtained from the face image, and the second RGB color value of the pixel at the first key point coordinate is obtained from the skin-smoothed image. According to the first RGB color value, the second RGB color value, and the transparency value A, the fused pixel value of the pixel at the first key point coordinate in the face image and the pixel at the first key point coordinate in the skin-smoothed image is calculated. The current pixel value of the pixel at the first key point coordinate in the face image is then reset to the fused pixel value.

In calculating the fused pixel value corresponding to the first key point coordinate, a first product of the first RGB color value and the transparency value A is calculated, along with a second product of the second RGB color value and the transparency value A; the average of the first product and the second product is then calculated and determined as the fused pixel value of the pixel at the first key point coordinate in the face image and the pixel at the first key point coordinate in the skin-smoothed image.
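The per-pixel computation just described — the average of (first RGB × A) and (second RGB × A) — reduces to a few lines. A minimal sketch with made-up sample values:

```python
import numpy as np

def fuse_pixel(rgb_face, rgb_smooth, alpha):
    """Fused value = average of the two alpha-weighted RGB products,
    per the fusion rule for the first key point coordinate."""
    first = np.asarray(rgb_face, dtype=np.float64) * alpha    # first product
    second = np.asarray(rgb_smooth, dtype=np.float64) * alpha  # second product
    return (first + second) / 2.0

# illustrative pixel: face image RGB, skin-smoothed RGB, alpha from the blemish map
fused = fuse_pixel((200, 120, 90), (180, 110, 80), 0.5)
```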

For each key point coordinate corresponding to the detected facial blemishes, the fused pixel value of the corresponding pixel in the face image and the corresponding pixel in the skin-smoothed image is calculated in the above manner. The current pixel value of the pixel at each key point coordinate in the face image is then replaced with the corresponding fused pixel value, thereby completing, within the face image, the fusion between the blemish areas of the face image and the blemish areas at the corresponding positions in the skin-smoothed image.

In other embodiments of the present application, the face image may include the face areas of multiple users, and concealment may be performed on each face area in parallel or serially through the concealment processing method provided by the embodiments of the present application.

After the face area in the face image is concealed in the above manner to obtain the concealed face image corresponding to the face image, the concealed face image may also be returned to the user terminal, which receives and displays it.

In the embodiments of the present application, facial blemishes in the face image are identified automatically, and the material is pasted over the facial blemishes in the blank texture image corresponding to the face image to obtain the blemish texture map. Full-face skin smoothing is applied to the face image to obtain the skin-smoothed image. Based on the blemish texture map, the blemish areas in the face image are fused with the blemish areas at the same positions in the skin-smoothed image, automatically generating a face image with the blemishes concealed. No manual per-pixel optimization by the user is needed: the preset material image is pasted at the positions corresponding to the facial blemishes in the blank texture image, and its transparency decreases from the edge to the center, matching the actual characteristics of facial blemishes such as acne and spots. Fusing the blemish areas of the face image and the skin-smoothed image based on the blemish texture map yields a more natural blemish-removal effect. Moreover, the whole process runs automatically without manual intervention, which greatly improves processing performance and enables real-time concealment of face images.

An embodiment of the present application further provides an image processing apparatus configured to execute the image processing method provided by any of the above embodiments. As shown in Figure 27, the apparatus includes:

a detection module 400, configured to detect the key point coordinates corresponding to facial blemishes in a user's face image;

a first generation module 500, configured to generate a blemish texture map according to the key point coordinates corresponding to the facial blemishes and a preset material image;

a second generation module 600, configured to generate a concealed face image corresponding to the face image according to the face image and the blemish texture map.

The first generation module 500 is configured to obtain a blank texture image corresponding to the face image; locate, according to the key point coordinates corresponding to the facial blemishes, the positions corresponding to the facial blemishes on the blank texture image; and paste the preset material image at the positions corresponding to the facial blemishes on the blank texture image to obtain the blemish texture map.

The second generation module 600 is configured to blur the face image to obtain a skin-smoothed image corresponding to the face image; and, according to the blemish texture map, perform image fusion between the blemish areas in the face image and the blemish areas at the same positions in the skin-smoothed image to obtain the concealed face image corresponding to the face image.

The second generation module 600 is configured to obtain, from the blemish texture map, the transparency value A of the pixel at a first key point coordinate, the first key point coordinate being any one of the key point coordinates corresponding to the facial blemishes; and, according to the transparency value A, fuse the pixel at the first key point coordinate in the face image with the pixel at the first key point coordinate in the skin-smoothed image.

The second generation module 600 is configured to obtain the first RGB color value of the pixel at the first key point coordinate from the face image and the second RGB color value of the pixel at the first key point coordinate from the skin-smoothed image; calculate, according to the first RGB color value, the second RGB color value, and the transparency value A, the fused pixel value of the pixel at the first key point coordinate in the face image and the pixel at the first key point coordinate in the skin-smoothed image; and reset the current pixel value of the pixel at the first key point coordinate in the face image to the fused pixel value.

The second generation module 600 is configured to calculate a first product of the first RGB color value and the transparency value A and a second product of the second RGB color value and the transparency value A; calculate the average of the first product and the second product; and determine the average as the fused pixel value of the pixel at the first key point coordinate in the face image and the pixel at the first key point coordinate in the skin-smoothed image.

The preset material image is an image with a transparency gradient, the transparency of whose pixels decreases from the edge of the preset material image toward its center.

The apparatus further includes a face detection module, configured to detect whether the face image contains a face area; if so, perform the operation of detecting the key point coordinates corresponding to the facial blemishes in the user's face image; if not, send prompt information to the user's terminal, the prompt information being used to prompt the user that the face image needs to contain a face area.

The image processing apparatus provided by the above embodiments of the present application and the image processing method provided by the embodiments of the present application stem from the same inventive concept, and the apparatus has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.

Embodiments of the present application further provide an electronic device for executing the above image processing method, a schematic diagram of which is provided in some embodiments of the present application. The electronic device includes a processor, a memory, a bus, and a communication interface, where the processor, the communication interface, and the memory are connected through the bus; the memory stores a computer program executable on the processor, and when the processor runs the computer program, the image processing method provided by any of the foregoing embodiments of the present application is executed.

The memory may include a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the device and at least one other network element is realized through at least one communication interface (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, or the like may be used.

The bus may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The memory is configured to store a program; the processor executes the program after receiving an execution instruction, and the image processing method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor.

The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.

The electronic device provided by the embodiments of the present application and the image processing method provided by the embodiments of the present application stem from the same inventive concept, and the device has the same beneficial effects as the method it adopts, runs, or implements.

Embodiments of the present application further provide a computer-readable storage medium corresponding to the image processing method provided by the foregoing embodiments. The computer-readable storage medium shown is an optical disc on which a computer program (i.e., a program product) is stored; when the computer program is run by a processor, it executes the image processing method provided by any of the foregoing embodiments.

It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase-change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, or other optical or magnetic storage media, which will not be enumerated here.

The computer-readable storage medium provided by the above embodiments of the present application and the image processing method provided by the embodiments of the present application stem from the same inventive concept, and the medium has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.

Embodiment 10

An embodiment of the present application further provides a makeup progress detection apparatus configured to execute the makeup progress detection method provided by any of the above embodiments. As shown in Figure 28, the apparatus includes:

a video acquisition module 700, configured to acquire a real-time makeup video of a user currently applying a specific makeup look;

a makeup progress determination module 800, configured to determine, according to the initial frame image and the current frame image of the real-time makeup video, the user's current progress in applying the specific makeup look.

The specific makeup look includes a highlight look or a contour look. The makeup progress determination module 800 is configured to obtain at least one target makeup area corresponding to the specific makeup look; obtain, according to the target makeup areas, a first target area image corresponding to the specific makeup look from the initial frame image and a second target area image corresponding to the specific makeup look from the current frame image; and determine, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

The makeup progress determination module 800 is configured to detect first face key points corresponding to the initial frame image; obtain, according to the first face key points, a face area image corresponding to the initial frame image; and obtain, according to the first face key points and the target makeup areas, the first target area image corresponding to the specific makeup look from the face area image.

The makeup progress determination module 800 is configured to determine, from the first face key points, one or more target key points located on the area contour corresponding to a target makeup area in the face area image; generate, according to the target key points corresponding to the target makeup areas, a mask image corresponding to the face area image; and perform an AND operation on the mask image and the face area image to obtain the first target area image corresponding to the specific makeup look.

The makeup progress determination module 800 is configured to: if the number of target key points corresponding to a target makeup area is more than one, determine, according to the target key points, each edge coordinate of the target makeup area in the face area image, and modify the pixel values of all pixels within the area enclosed by the edge coordinates to a preset value to obtain the mask area corresponding to the target makeup area; if the number of target key points corresponding to a target makeup area is one, draw an elliptical area of a preset size centered on the target key point, and modify the pixel values of all pixels within the elliptical area to the preset value to obtain the mask area corresponding to the target makeup area; and modify the pixel values of all pixels outside the mask areas to zero to obtain the mask image corresponding to the face area image.
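The single-key-point branch above (an elliptical mask of a preset size around the key point, with a preset value inside and zero outside) might be sketched as follows; the axes lengths and the value 255 are illustrative assumptions, and the multi-key-point polygon branch would use a polygon fill instead:

```python
import numpy as np

def ellipse_mask(shape, center, axes, value=255):
    """Mask image for one target makeup area: `value` inside the ellipse
    centered on the target key point, zero everywhere outside it."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = center
    ay, ax = axes
    inside = ((y - cy) / ay) ** 2 + ((x - cx) / ax) ** 2 <= 1.0
    mask = np.zeros(shape, dtype=np.uint8)
    mask[inside] = value
    return mask

# hypothetical key point at (20, 20) with preset semi-axes 6 (vertical) x 10 (horizontal)
mask = ellipse_mask((40, 40), center=(20, 20), axes=(6, 10))
```

ANDing such a mask with the face area image then keeps only the pixels of the target makeup area, as described for the first target area image.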

The specific makeup look includes a blush look. The makeup progress determination module 800 is configured to obtain at least one target makeup area corresponding to the specific makeup look; generate a makeup mask map according to the target makeup areas; and determine, according to the makeup mask map, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

The makeup progress determination module 800 is configured to obtain, with the makeup mask map as a reference, a first target area image of the applied makeup from the initial frame image and a second target area image of the applied makeup from the current frame image; and determine, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

The specific makeup look includes an eyeliner look. The makeup progress determination module 800 is configured to obtain the makeup mask maps corresponding to the initial frame image and the current frame image; simulate, according to the initial frame image, a result image of the completed eyeliner makeup; and determine, according to the makeup mask maps, the result image, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

The makeup progress determination module 800 is configured to obtain, with the makeup mask map corresponding to the initial frame image as a reference, a first target area image of the applied makeup from the initial frame image; obtain, according to the makeup mask map corresponding to the current frame image, a second target area image of the applied makeup from the current frame image; obtain, according to the result image, a third target area image of the eyeliner makeup; and determine, according to the first target area image, the second target area image, and the third target area image, the current makeup progress corresponding to the current frame image.

The makeup progress determination module 800 is configured to convert the first target area image, the second target area image, and the third target area image respectively into images containing the saturation channel in the HLS color space; and determine, according to the converted first, second, and third target area images, the current makeup progress corresponding to the current frame image.

The makeup progress determination module 800 is configured to calculate respectively a first average pixel value corresponding to the converted first target area image, a second average pixel value corresponding to the converted second target area image, and a third average pixel value corresponding to the converted third target area image; calculate a first difference between the second average pixel value and the first average pixel value, and a second difference between the third average pixel value and the first average pixel value; and calculate the ratio of the first difference to the second difference to obtain the current makeup progress corresponding to the current frame image.
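The eyeliner progress formula above — the first difference over the second difference of the saturation-channel means — reduces to a few lines. A minimal sketch, with saturation values made up for illustration:

```python
import numpy as np

def eyeliner_progress(s_initial, s_current, s_result):
    """Progress = (mean(current) - mean(initial)) / (mean(result) - mean(initial)),
    computed on the HLS saturation channel of the three target area images."""
    a1 = float(np.mean(s_initial))   # first average pixel value
    a2 = float(np.mean(s_current))   # second average pixel value
    a3 = float(np.mean(s_result))    # third average pixel value
    return (a2 - a1) / (a3 - a1)

progress = eyeliner_progress(
    np.full((4, 4), 10.0),   # bare-skin saturation (initial frame)
    np.full((4, 4), 40.0),   # part-way through the eyeliner (current frame)
    np.full((4, 4), 70.0),   # simulated completed-eyeliner result image
)
```

A guard against a zero second difference (initial already equal to the result) would be needed in practice; it is omitted here for brevity.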

The makeup progress determination module 800 is configured to align the first target area image with the second target area image, and to align the first target area image with the third target area image.

The makeup progress determination module 800 is configured to binarize the first target area image and the second target area image respectively to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; and perform an AND operation on the first binarized mask image and the second binarized mask image to obtain a second mask image corresponding to the intersection of the first target area image and the second target area image.
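A minimal sketch of this binarize-then-AND step, assuming a simple fixed threshold (the source does not specify the binarization rule):

```python
import numpy as np

def intersection_mask(img1, img2, thresh=128):
    """Binarize both target area images, then bitwise-AND the two masks
    so only pixels present in both areas survive (the second mask image)."""
    m1 = (img1 >= thresh).astype(np.uint8) * 255   # first binarized mask
    m2 = (img2 >= thresh).astype(np.uint8) * 255   # second binarized mask
    return np.bitwise_and(m1, m2)

a = np.array([[200, 50], [200, 200]], dtype=np.uint8)
b = np.array([[200, 200], [50, 200]], dtype=np.uint8)
second_mask = intersection_mask(a, b)
```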

The makeup progress determination module 800 is configured to obtain the face area image corresponding to the initial frame image and the face area image corresponding to the result image; perform an AND operation on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; and perform an AND operation on the second mask image and the face area image corresponding to the result image to obtain a new second target area image corresponding to the result image.

The makeup progress determination module 800 is configured to obtain an eyeliner style map selected by the user; if the user's eye state in the initial frame image is an open-eye state, obtain the open-eye style map corresponding to the eyeliner style map and determine it as the makeup mask map corresponding to the initial frame image; if the user's eye state in the initial frame image is a closed-eye state, obtain the closed-eye style map corresponding to the eyeliner style map and determine it as the makeup mask map corresponding to the initial frame image.

The specific makeup look includes an eyeshadow look. The makeup progress determination module 800 is configured to obtain an eyeshadow mask map; split out from the eyeshadow mask map, for each target makeup area of the eyeshadow makeup, the makeup mask map corresponding to that target makeup area; and determine, according to the initial frame image, the current frame image, and the makeup mask map corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于分别以每个目标上妆区域对应的美妆掩码图为参照，从初始帧图像中获取每个目标上妆区域对应的第一目标区域图像；分别以每个目标上妆区域对应的美妆掩码图为参照，从当前帧图像中获取每个目标上妆区域对应的第二目标区域图像；根据每个目标上妆区域对应的第一目标区域图像及第二目标区域图像，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to obtain, with the beauty makeup mask map of each target makeup area as a reference, a first target area image corresponding to each target makeup area from the initial frame image, and a second target area image corresponding to each target makeup area from the current frame image; and determine the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.

化妆进度确定模块800,用于分别将每个目标上妆区域对应的第一目标区域图像和第二目标区域图像转换为HLS颜色空间下包含预设单通道分量的图像；根据转换后的每个目标上妆区域对应的第一目标区域图像和第二目标区域图像，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to convert the first target area image and the second target area image corresponding to each target makeup area into images containing a preset single-channel component in the HLS color space; and determine the current makeup progress corresponding to the current frame image according to the converted first and second target area images of each target makeup area.

化妆进度确定模块800,用于分别计算转换后同一目标上妆区域对应的第一目标区域图像和第二目标区域图像中位置相同的像素点对应的预设单通道分量的差值绝对值；统计每个目标上妆区域对应的差值绝对值满足预设化妆完成条件的像素点数目；分别计算每个目标上妆区域对应的像素点数目与对应目标上妆区域中的像素点总数目之间的比值，得到每个目标上妆区域对应的化妆进度；根据每个目标上妆区域对应的化妆进度及每个目标上妆区域对应的预设权重，计算当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to calculate, for each target makeup area, the absolute difference of the preset single-channel component between pixels at the same position in the converted first target area image and second target area image; count, for each target makeup area, the number of pixels whose absolute difference satisfies the preset makeup completion condition; divide, for each target makeup area, the counted number of pixels by the total number of pixels in that area to obtain the makeup progress of each target makeup area; and calculate the current makeup progress corresponding to the current frame image according to the makeup progress of each target makeup area and its preset weight.
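
The per-area progress and weighted combination described above can be sketched as follows, assuming uint8 single-channel images and an illustrative completion threshold (the patent leaves the concrete threshold and weights as presets):

```python
import numpy as np

def region_progress(first: np.ndarray, second: np.ndarray, thresh: int) -> float:
    """Fraction of pixels whose single-channel absolute difference
    reaches the (illustrative) makeup-completion threshold."""
    diff = np.abs(first.astype(np.int16) - second.astype(np.int16))
    done = np.count_nonzero(diff >= thresh)
    return done / first.size

def overall_progress(progresses, weights) -> float:
    """Weighted sum of per-area progresses (weights are presets)."""
    return float(np.dot(progresses, weights))

a = np.array([[10, 10], [10, 10]], dtype=np.uint8)   # before makeup
b = np.array([[40, 10], [40, 40]], dtype=np.uint8)   # current frame
p = region_progress(a, b, thresh=20)                 # 3 of 4 pixels changed
```

Casting to int16 before subtracting avoids uint8 wrap-around when the second value is smaller than the first.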

化妆进度确定模块800,用于检测初始帧图像对应的第一人脸关键点；根据第一人脸关键点，获取初始帧图像对应的人脸区域图像；以美妆掩码图为参照，从人脸区域图像中获取上妆的第一目标区域图像。The makeup progress determination module 800 is configured to detect first face key points corresponding to the initial frame image; obtain the face area image corresponding to the initial frame image according to the first face key points; and, with the beauty makeup mask map as a reference, obtain the first target area image to which makeup is applied from the face area image.

化妆进度确定模块800,用于分别将美妆掩码图和人脸区域图像转换为二值化图像；对美妆掩码图对应的二值化图像和人脸区域图像对应的二值化图像进行与运算，获得美妆掩码图与人脸区域图像的相交区域对应的第一掩膜图像；对第一掩膜图像与初始帧图像对应的人脸区域图像进行与运算，获得第一目标区域图像。The makeup progress determination module 800 is configured to convert the beauty makeup mask map and the face area image into binarized images; perform an AND operation on the two binarized images to obtain a first mask image corresponding to the intersection of the beauty makeup mask map and the face area image; and perform an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain the first target area image.
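
A small sketch of the binarization and intersection steps (the threshold value and names are assumptions; the patent only specifies that both images are binarized and intersected with an AND operation):

```python
import numpy as np

def binarize(img: np.ndarray, thresh: int = 0) -> np.ndarray:
    """Grayscale image -> 0/255 binary map (threshold is illustrative)."""
    return np.where(img > thresh, 255, 0).astype(np.uint8)

mask_gray = np.array([[10, 0], [200, 0]], dtype=np.uint8)   # beauty mask
face_gray = np.array([[5, 5], [0, 9]], dtype=np.uint8)      # face region

mask_bin = binarize(mask_gray)
face_bin = binarize(face_gray)
inter = np.bitwise_and(mask_bin, face_bin)   # first mask image (intersection)
```

Only positions that are nonzero in both inputs survive, which is exactly the intersection region the first mask image represents.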

化妆进度确定模块800,用于根据美妆掩码图对应的标准人脸关键点，确定美妆掩码图中位于每个上妆区域的轮廓上的一个或多个第一定位点；根据第一人脸关键点，从人脸区域图像中确定出与每个第一定位点对应的第二定位点；对美妆掩码图进行拉伸处理，将每个第一定位点拉伸至对应的每个第二定位点对应的位置处。The makeup progress determination module 800 is configured to determine, according to the standard face key points corresponding to the beauty makeup mask map, one or more first positioning points on the contour of each makeup area in the mask map; determine, according to the first face key points, a second positioning point in the face area image corresponding to each first positioning point; and stretch the beauty makeup mask map so that each first positioning point is moved to the position of its corresponding second positioning point.

化妆进度确定模块800,用于将美妆掩码图拆分为多个子掩码图，每个子掩码图中包括至少一个目标上妆区域；分别将每个子掩码图及人脸区域图像转换为二值化图像；分别对每个子掩码图对应的二值化图像与人脸区域图像对应的二值化图像进行与运算，获得每个子掩码图各自对应的子掩膜图像；分别对每个子掩膜图像与初始帧图像对应的人脸区域图像进行与运算，获得初始帧图像对应的多个子目标区域图像；将多个子目标区域图像合并为初始帧图像对应的第一目标区域图像。The makeup progress determination module 800 is configured to split the beauty makeup mask map into multiple sub-mask maps, each containing at least one target makeup area; convert each sub-mask map and the face area image into binarized images; perform an AND operation between the binarized image of each sub-mask map and the binarized image of the face area image to obtain the sub-mask image corresponding to each sub-mask map; perform an AND operation between each sub-mask image and the face area image corresponding to the initial frame image to obtain multiple sub-target area images corresponding to the initial frame image; and merge the sub-target area images into the first target area image corresponding to the initial frame image.

化妆进度确定模块800,用于根据美妆掩码图对应的标准人脸关键点，确定第一子掩码图中位于目标上妆区域的轮廓上的一个或多个第一定位点，第一子掩码图为多个子掩码图中的任一子掩码图；根据第一人脸关键点，从人脸区域图像中确定出与每个第一定位点对应的第二定位点；对第一子掩码图进行拉伸处理，将每个第一定位点拉伸至对应的每个第二定位点对应的位置处。The makeup progress determination module 800 is configured to determine, according to the standard face key points corresponding to the beauty makeup mask map, one or more first positioning points on the contour of the target makeup area in a first sub-mask map, the first sub-mask map being any one of the multiple sub-mask maps; determine, according to the first face key points, a second positioning point in the face area image corresponding to each first positioning point; and stretch the first sub-mask map so that each first positioning point is moved to the position of its corresponding second positioning point.

特定妆容包括眉毛妆容;化妆进度确定模块800,用于从初始帧图像中获取眉毛对应的第一目标区域图像,及从当前帧图像中获取眉毛对应的第二目标区域图像;根据第一目标区域图像和第二目标区域图像,确定当前帧图像对应的当前化妆进度。The specific makeup includes eyebrow makeup; the makeup progress determination module 800 is used to obtain the first target area image corresponding to the eyebrows from the initial frame image, and obtain the second target area image corresponding to the eyebrows from the current frame image; according to the first target area image and the second target area image to determine the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于检测初始帧图像对应的第一人脸关键点；根据第一人脸关键点，获取初始帧图像对应的人脸区域图像；根据第一人脸关键点中包括的眉毛关键点，从人脸区域图像中获取眉毛对应的第一目标区域图像。The makeup progress determination module 800 is configured to detect first face key points corresponding to the initial frame image; obtain the face area image corresponding to the initial frame image according to the first face key points; and obtain the first target area image corresponding to the eyebrows from the face area image according to the eyebrow key points included in the first face key points.

化妆进度确定模块800,用于对第一人脸关键点包括的眉头至眉峰之间的眉毛关键点进行插值，得到多个插值点；从人脸区域图像中截取出眉头至眉峰之间所有眉毛关键点及多个插值点连接而成的闭合区域，得到眉头至眉峰之间的部分眉毛图像；从人脸区域图像中截取出眉峰至眉尾之间的所有眉毛关键点连接而成的闭合区域，得到眉峰至眉尾之间的部分眉毛图像；将眉头至眉峰之间的部分眉毛图像与眉峰至眉尾之间的部分眉毛图像拼接为眉毛对应的第一目标区域图像。The makeup progress determination module 800 is configured to interpolate the eyebrow key points between the brow head and the brow peak included in the first face key points to obtain multiple interpolation points; cut out, from the face area image, the closed region formed by connecting all eyebrow key points and interpolation points between the brow head and the brow peak to obtain a partial eyebrow image from brow head to brow peak; cut out, from the face area image, the closed region formed by connecting all eyebrow key points between the brow peak and the brow tail to obtain a partial eyebrow image from brow peak to brow tail; and stitch the two partial eyebrow images together into the first target area image corresponding to the eyebrows.
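
The interpolation step can be illustrated with simple linear interpolation between consecutive key points; the patent does not fix a particular interpolation scheme, so the linear choice below is only an assumption:

```python
import numpy as np

def interpolate_points(points: np.ndarray, n: int) -> np.ndarray:
    """Insert n evenly spaced points between each pair of consecutive
    keypoints (densifies the brow-head-to-brow-peak contour)."""
    out = []
    for p, q in zip(points[:-1], points[1:]):
        # include p, skip q to avoid duplicating shared endpoints
        for t in np.linspace(0.0, 1.0, n + 2)[:-1]:
            out.append((1 - t) * p + t * q)
    out.append(points[-1])
    return np.array(out)

brow = np.array([[0.0, 0.0], [4.0, 2.0]])   # two illustrative keypoints
dense = interpolate_points(brow, n=1)        # one extra point per segment
```

The densified point list can then be used to close a tighter polygon around the upper brow contour before cropping.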

化妆进度确定模块800,用于分别将第一目标区域图像和第二目标区域图像转换为HSV颜色空间下包含预设单通道分量的图像；根据转换后的第一目标区域图像和第二目标区域图像，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to convert the first target area image and the second target area image into images containing a preset single-channel component in the HSV color space; and determine the current makeup progress corresponding to the current frame image according to the converted first and second target area images.

化妆进度确定模块800,用于分别计算转换后的第一目标区域图像和第二目标区域图像中位置相同的像素点对应的预设单通道分量的差值绝对值；统计对应的差值绝对值满足预设化妆完成条件的像素点数目；计算统计的像素点数目与第一目标区域图像中所有目标上妆区域中的像素点总数目之间的比值，得到当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to calculate the absolute difference of the preset single-channel component between pixels at the same position in the converted first target area image and second target area image; count the number of pixels whose absolute difference satisfies the preset makeup completion condition; and calculate the ratio of the counted number of pixels to the total number of pixels in all target makeup areas of the first target area image to obtain the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于分别对第一目标区域图像和第二目标区域图像进行二值化处理，得到第一目标区域图像对应的第一二值化掩膜图像和第二目标区域图像对应的第二二值化掩膜图像；对第一二值化掩膜图像和第二二值化掩膜图像进行与运算，得到第一目标区域图像与第二目标区域图像的相交区域对应的第二掩膜图像；获取初始帧图像对应的人脸区域图像及当前帧图像对应的人脸区域图像；对第二掩膜图像和初始帧图像对应的人脸区域图像进行与运算，得到初始帧图像对应的新第一目标区域图像；对第二掩膜图像和当前帧图像对应的人脸区域图像进行与运算，得到当前帧图像对应的新第二目标区域图像。The makeup progress determination module 800 is configured to binarize the first target area image and the second target area image to obtain a first binarized mask image and a second binarized mask image; perform an AND operation on the two binarized mask images to obtain a second mask image corresponding to the intersection of the first and second target area images; obtain the face area images corresponding to the initial frame image and the current frame image; perform an AND operation on the second mask image and the face area image of the initial frame image to obtain a new first target area image corresponding to the initial frame image; and perform an AND operation on the second mask image and the face area image of the current frame image to obtain a new second target area image corresponding to the current frame image.

化妆进度确定模块800,用于分别对第一目标区域图像和第二目标区域图像中的上妆区域进行边界腐蚀处理。The makeup progress determination module 800 is configured to perform boundary erosion processing on the makeup areas in the first target area image and the second target area image respectively.
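
Boundary erosion can be sketched with a plain numpy implementation of 3x3 binary erosion (a stand-in for a library routine such as cv2.erode; the structuring element and iteration count are assumptions):

```python
import numpy as np

def erode(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary erosion with a 3x3 structuring element: a pixel survives
    only if its entire 3x3 neighbourhood is set, shrinking area borders."""
    m = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(m, 1, constant_values=False)
        result = np.ones_like(m, dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                result &= padded[1 + dy : 1 + dy + mask.shape[0],
                                 1 + dx : 1 + dx + mask.shape[1]]
        m = result
    return (m * 255).astype(np.uint8)

blob = np.zeros((5, 5), dtype=np.uint8)
blob[1:4, 1:4] = 255        # a 3x3 makeup area
shrunk = erode(blob)        # only the centre pixel survives
```

Eroding both target area images trims unreliable boundary pixels before the per-pixel comparison.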

特定妆容包括粉底妆容；化妆进度确定模块800,用于根据初始帧图像，模拟生成完成特定妆容后的结果图像；分别获取初始帧图像、结果图像和当前帧图像对应的整体图像亮度；分别获取初始帧图像、结果图像和当前帧图像对应的人脸区域亮度；根据初始帧图像、结果图像和当前帧图像各自对应的整体图像亮度和人脸区域亮度，确定当前帧图像对应的当前化妆进度。The specific makeup includes foundation makeup. The makeup progress determination module 800 is configured to simulate and generate, from the initial frame image, a result image after the specific makeup is completed; obtain the overall image brightness corresponding to each of the initial frame image, the result image, and the current frame image; obtain the face area brightness corresponding to each of the three images; and determine the current makeup progress corresponding to the current frame image according to the overall image brightness and face area brightness of each image.

化妆进度确定模块800,用于分别将初始帧图像、结果图像和当前帧图像转换为灰度图像；分别计算转换后初始帧图像、结果图像和当前帧图像各自对应的灰度图像中像素点的灰度平均值；将初始帧图像、结果图像和当前帧图像各自对应的灰度平均值分别确定为初始帧图像、结果图像和当前帧图像各自对应的整体图像亮度。The makeup progress determination module 800 is configured to convert the initial frame image, the result image, and the current frame image into grayscale images; calculate the average grayscale value of the pixels in each converted grayscale image; and determine the average grayscale value of each image as the overall image brightness corresponding to that image.
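
A sketch of the brightness computation, using the common BT.601 luma weights as one possible gray conversion (the patent does not specify which gray conversion is used, so the weights are an assumption):

```python
import numpy as np

def brightness(img_rgb: np.ndarray) -> float:
    """Overall image brightness = mean of the grayscale conversion.

    Uses the BT.601 luma weights (0.299 R + 0.587 G + 0.114 B) as an
    illustrative gray mapping; any consistent mapping would do.
    """
    gray = (0.299 * img_rgb[..., 0]
            + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    return float(gray.mean())

white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
```

Applying the same function to a cropped face region gives the face area brightness used in the next steps.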

化妆进度确定模块800,用于分别获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像；分别将初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像转换为人脸灰度图像；分别计算初始帧图像、结果图像和当前帧图像各自对应的人脸灰度图像中像素点的灰度平均值，得到初始帧图像、结果图像和当前帧图像各自对应的人脸区域亮度。The makeup progress determination module 800 is configured to obtain the face area images corresponding to the initial frame image, the result image, and the current frame image; convert each face area image into a face grayscale image; and calculate the average grayscale value of the pixels in each face grayscale image to obtain the face area brightness corresponding to each of the three images.

化妆进度确定模块800,用于根据初始帧图像对应的整体图像亮度和人脸区域亮度以及当前帧图像对应的整体图像亮度和人脸区域亮度，确定当前帧图像对应的第一环境变化亮度；根据初始帧图像对应的整体图像亮度和人脸区域亮度以及结果图像对应的整体图像亮度和人脸区域亮度，确定结果图像对应的第二环境变化亮度；根据第一环境变化亮度、第二环境变化亮度、初始帧图像对应的人脸区域亮度、当前帧图像对应的人脸区域亮度、结果图像对应的人脸区域亮度，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to determine the first environment change brightness corresponding to the current frame image according to the overall image brightness and face area brightness of the initial frame image and of the current frame image; determine the second environment change brightness corresponding to the result image according to the overall image brightness and face area brightness of the initial frame image and of the result image; and determine the current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, and the face area brightness of the initial frame image, the current frame image, and the result image.

化妆进度确定模块800,用于计算初始帧图像对应的整体图像亮度与其对应的人脸区域亮度之间的差值，得到初始帧图像的环境亮度；计算当前帧图像对应的整体图像亮度与其对应的人脸区域亮度之间的差值，得到当前帧图像的环境亮度；将当前帧图像的环境亮度与初始帧图像的环境亮度之间的差值绝对值确定为当前帧图像对应的第一环境变化亮度。The makeup progress determination module 800 is configured to calculate the difference between the overall image brightness of the initial frame image and its face area brightness to obtain the ambient brightness of the initial frame image; calculate the difference between the overall image brightness of the current frame image and its face area brightness to obtain the ambient brightness of the current frame image; and determine the absolute difference between the two ambient brightness values as the first environment change brightness corresponding to the current frame image.

化妆进度确定模块800,用于根据第一环境变化亮度、初始帧图像对应的人脸区域亮度、当前帧图像对应的人脸区域亮度，确定当前帧图像对应的上妆亮度变化值；根据第二环境变化亮度、初始帧图像对应的人脸区域亮度、结果图像对应的人脸区域亮度，确定结果图像对应的上妆亮度变化值；计算当前帧图像对应的上妆亮度变化值与结果图像对应的上妆亮度变化值的比值，得到当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to determine the makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness of the initial frame image, and the face area brightness of the current frame image; determine the makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness of the initial frame image, and the face area brightness of the result image; and calculate the ratio of the two makeup brightness change values to obtain the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于计算当前帧图像对应的人脸区域亮度与初始帧图像对应的人脸区域亮度之间的差值，得到当前帧图像对应的总亮度变化值；计算总亮度变化值与第一环境变化亮度之间的差值，得到当前帧图像对应的上妆亮度变化值。The makeup progress determination module 800 is configured to calculate the difference between the face area brightness of the current frame image and that of the initial frame image to obtain the total brightness change value corresponding to the current frame image; and calculate the difference between the total brightness change value and the first environment change brightness to obtain the makeup brightness change value corresponding to the current frame image.
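
Putting the brightness relations of the foundation branch together (ambient brightness, environment change, makeup brightness change, and their ratio), a worked sketch with illustrative numbers:

```python
def foundation_progress(overall_init: float, face_init: float,
                        overall_cur: float, face_cur: float,
                        overall_res: float, face_res: float) -> float:
    """Progress = makeup-driven brightness change of the current frame,
    relative to the simulated finished-makeup result image.

    Combines the steps described above: ambient brightness is overall
    minus face brightness; the environment change is its absolute
    deviation from the initial frame; the makeup change is the face
    brightness change minus the environment change.
    """
    env_init = overall_init - face_init
    env_change_cur = abs((overall_cur - face_cur) - env_init)
    env_change_res = abs((overall_res - face_res) - env_init)
    makeup_change_cur = (face_cur - face_init) - env_change_cur
    makeup_change_res = (face_res - face_init) - env_change_res
    return makeup_change_cur / makeup_change_res

# Illustrative brightness values (not from the patent):
p = foundation_progress(100, 90, 104, 98, 110, 110)
```

With these numbers the current makeup change is 4 and the target change is 10, giving a progress of 0.4.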

化妆进度确定模块800,用于若第一环境变化亮度大于预设阈值,则将上一帧图像对应的化妆进度确定为当前帧图像对应的当前化妆进度;发送第一提示信息给用户的终端,第一提示信息用于提示用户回到初始帧图像对应的亮度环境下上妆。The makeup progress determination module 800 is configured to determine the makeup progress corresponding to the previous frame image as the current makeup progress corresponding to the current frame image if the brightness of the first environmental change is greater than the preset threshold; send the first prompt information to the user's terminal, The first prompt information is used to prompt the user to return to the brightness environment corresponding to the initial frame image to apply makeup.

特定妆容包括遮瑕妆容；化妆进度确定模块800,用于分别获取初始帧图像和当前帧图像各自对应的脸部瑕疵信息；根据初始帧图像对应的脸部瑕疵信息和当前帧图像对应的脸部瑕疵信息，计算当前帧图像与初始帧图像之间的脸部瑕疵差异值；若脸部瑕疵差异值大于预设阈值，则根据脸部瑕疵差异值和初始帧图像对应的脸部瑕疵信息，计算当前帧图像对应的当前化妆进度；若脸部瑕疵差异值小于或等于预设阈值，则获取用户完成遮瑕上妆后的结果图像，根据初始帧图像、结果图像和当前帧图像，确定当前帧图像对应的当前化妆进度。The specific makeup includes concealer makeup. The makeup progress determination module 800 is configured to obtain the facial blemish information corresponding to the initial frame image and the current frame image; calculate the facial blemish difference value between the current frame image and the initial frame image according to the two sets of blemish information; if the facial blemish difference value is greater than a preset threshold, calculate the current makeup progress corresponding to the current frame image according to the facial blemish difference value and the facial blemish information of the initial frame image; and if the facial blemish difference value is less than or equal to the preset threshold, obtain the result image after the user completes the concealer makeup, and determine the current makeup progress corresponding to the current frame image according to the initial frame image, the result image, and the current frame image.

化妆进度确定模块800,用于分别计算每种瑕疵类别下初始帧图像对应的瑕疵数目与当前帧图像对应的瑕疵数目之间的差值；计算每种瑕疵类别对应的差值之和，将得到的和值作为当前帧图像与初始帧图像之间的脸部瑕疵差异值。The makeup progress determination module 800 is configured to calculate, for each blemish category, the difference between the number of blemishes in the initial frame image and the number of blemishes in the current frame image; and sum the differences over all blemish categories, using the sum as the facial blemish difference value between the current frame image and the initial frame image.

化妆进度确定模块800,用于计算初始帧图像对应的脸部瑕疵信息中各瑕疵类别对应的瑕疵数目之和，得到总瑕疵数；计算脸部瑕疵差异值与总瑕疵数之间的比值，将比值作为当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to sum the numbers of blemishes of all blemish categories in the facial blemish information of the initial frame image to obtain the total number of blemishes; and calculate the ratio of the facial blemish difference value to the total number of blemishes, using the ratio as the current makeup progress corresponding to the current frame image.
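
The blemish-count progress follows directly from the two formulas above; a small sketch (the category names are illustrative, not from the patent):

```python
def blemish_progress(init_counts: dict, cur_counts: dict) -> float:
    """Concealer progress = (blemishes that disappeared) / (initial total).

    init_counts / cur_counts map blemish category -> detected count in
    the initial and current frames respectively.
    """
    # Per-category difference, summed over all categories:
    diff = sum(init_counts[k] - cur_counts.get(k, 0) for k in init_counts)
    total = sum(init_counts.values())
    return diff / total

# Illustrative counts: 10 blemishes initially, 5 concealed so far.
p = blemish_progress({"acne": 6, "spot": 4}, {"acne": 2, "spot": 3})
```

Here 5 of the 10 initially detected blemishes are no longer visible, so the progress is 0.5.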

化妆进度确定模块800,用于根据初始帧图像,模拟生成用户完成遮瑕上妆后的结果图像;分别获取初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像;根据初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像,确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is used to simulate and generate the result image after the user completes the concealer and makeup according to the initial frame image; respectively obtain the corresponding face area images of the initial frame image, the result image and the current frame image; according to the initial frame image, The face area images corresponding to the result image and the current frame image respectively determine the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于分别将初始帧图像、结果图像和当前帧图像各自对应的人脸区域图像转换为HLS颜色空间下包含饱和度通道的图像；通过预设滤波算法分别计算转换后初始帧图像、结果图像和当前帧图像各自的人脸区域图像对应的平滑因子；根据初始帧图像、结果图像和当前帧图像各自对应的平滑因子，确定当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to convert the face area images of the initial frame image, the result image, and the current frame image into images containing the saturation channel in the HLS color space; calculate, through a preset filtering algorithm, the smoothing factor corresponding to each converted face area image; and determine the current makeup progress corresponding to the current frame image according to the smoothing factors of the initial frame image, the result image, and the current frame image.

化妆进度确定模块800,用于计算当前帧图像对应的平滑因子与初始帧图像对应的平滑因子之间的第一差值；计算结果图像对应的平滑因子与初始帧图像对应的平滑因子之间的第二差值；计算第一差值与第二差值之间的比值，将比值作为当前帧图像对应的当前化妆进度。The makeup progress determination module 800 is configured to calculate a first difference between the smoothing factor of the current frame image and that of the initial frame image; calculate a second difference between the smoothing factor of the result image and that of the initial frame image; and calculate the ratio of the first difference to the second difference, using the ratio as the current makeup progress corresponding to the current frame image.

化妆进度确定模块800,用于分别获取初始帧图像和当前帧图像各自对应的人脸区域图像；通过预设的皮肤检测模型分别检测初始帧图像和当前帧图像各自对应的人脸区域图像中各瑕疵类别对应的瑕疵数目，得到初始帧图像和当前帧图像各自对应的脸部瑕疵信息。The makeup progress determination module 800 is configured to obtain the face area images corresponding to the initial frame image and the current frame image; and detect, through a preset skin detection model, the number of blemishes of each blemish category in each face area image to obtain the facial blemish information corresponding to the initial frame image and the current frame image.

化妆进度确定模块800,用于根据初始帧图像对应的第一人脸关键点，对初始帧图像及第一人脸关键点进行旋转矫正；根据矫正后的第一人脸关键点，从矫正后的初始帧图像中截取包含人脸区域的图像；将包含人脸区域的图像缩放至预设尺寸，得到初始帧图像对应的人脸区域图像。The makeup progress determination module 800 is configured to perform rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image; cut out an image containing the face area from the corrected initial frame image according to the corrected first face key points; and scale the image containing the face area to a preset size to obtain the face area image corresponding to the initial frame image.

化妆进度确定模块800,用于根据第一人脸关键点包括的左眼关键点和右眼关键点，分别确定左眼中心坐标和右眼中心坐标；根据左眼中心坐标和右眼中心坐标，确定初始帧图像对应的旋转角度及旋转中心点坐标；根据旋转角度和旋转中心点坐标，对初始帧图像及第一人脸关键点进行旋转矫正。The makeup progress determination module 800 is configured to determine the left-eye center coordinates and the right-eye center coordinates according to the left-eye key points and right-eye key points included in the first face key points; determine the rotation angle and the rotation center point coordinates corresponding to the initial frame image according to the two eye center coordinates; and perform rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.
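
A sketch of deriving the rotation angle and rotation center from the two eye centers. Using the angle of the eye line and the midpoint of the eyes is one common alignment choice and is assumed here; the patent does not state the exact formula:

```python
import numpy as np

def rotation_params(left_eye, right_eye):
    """Angle (degrees) that levels the eye line, plus the rotation
    centre, taken here as the midpoint of the two eye centres."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    centre = ((left_eye[0] + right_eye[0]) / 2,
              (left_eye[1] + right_eye[1]) / 2)
    return angle, centre

angle, centre = rotation_params((0.0, 0.0), (4.0, 4.0))
```

Rotating the image by `-angle` about `centre` (e.g. with an affine warp) then levels the face; the same rotation is applied to the key points.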

化妆进度确定模块800,用于根据矫正后的第一人脸关键点,对矫正后的初始帧图像中包含的人脸区域进行图像截取。The makeup progress determination module 800 is configured to, according to the corrected first key points of the human face, perform image interception on the human face area included in the corrected initial frame image.

化妆进度确定模块800,用于从矫正后的第一人脸关键点中确定最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值；根据最小横坐标值、最小纵坐标值、最大横坐标值和最大纵坐标值，确定矫正后的初始帧图像中人脸区域对应的截取框；根据截取框，从矫正后的初始帧图像中截取出包含人脸区域的图像。The makeup progress determination module 800 is configured to determine the minimum abscissa, minimum ordinate, maximum abscissa, and maximum ordinate among the corrected first face key points; determine, from these four values, the cropping box corresponding to the face area in the corrected initial frame image; and crop the image containing the face area from the corrected initial frame image according to the cropping box.
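
The cropping box, including the preset-multiple enlargement mentioned next, can be sketched as follows (the scale value is illustrative):

```python
import numpy as np

def face_box(keypoints: np.ndarray, scale: float = 1.0):
    """Axis-aligned box around the key points, optionally enlarged about
    its centre by `scale` (the patent's 'preset multiple')."""
    x0, y0 = keypoints.min(axis=0)   # minimum abscissa / ordinate
    x1, y1 = keypoints.max(axis=0)   # maximum abscissa / ordinate
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) / 2 * scale, (y1 - y0) / 2 * scale
    return (cx - hw, cy - hh, cx + hw, cy + hh)

pts = np.array([[10.0, 20.0], [30.0, 60.0], [20.0, 40.0]])
box = face_box(pts, scale=1.5)   # enlarged box around the face points
```

Enlarging about the centre keeps some margin (forehead, chin) inside the crop before scaling to the preset size.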

化妆进度确定模块800,还用于将截取框放大预设倍数;根据放大后的截取框,从矫正后的初始帧图像中截取出包含人脸区域的图像。The makeup progress determination module 800 is further configured to enlarge the clipping frame by a preset multiple; and according to the zoomed-in clipping frame, clip an image containing a face area from the corrected initial frame image.

化妆进度确定模块800,还用于根据包含人脸区域的图像的尺寸及预设尺寸,对矫正后的第一人脸关键点进行缩放平移处理。The makeup progress determination module 800 is further configured to perform scaling and translation processing on the corrected first key points of the human face according to the size of the image including the human face area and the preset size.

该装置还包括:人脸检测模块,用于检测初始帧图像和当前帧图像中是否均仅包含同一个用户的人脸图像;如果是,则执行确定用户进行特定妆容的当前化妆进度的操作;如果否,则发送提示信息给用户的终端,提示信息用于提示用户保持实时化妆视频中仅出现同一个用户的人脸。The device also includes: a face detection module, which is used to detect whether the initial frame image and the current frame image only contain the face image of the same user; if so, perform the operation of determining the current makeup progress of the user for a specific makeup look; If not, send prompt information to the user's terminal, and the prompt information is used to prompt the user to keep only the face of the same user appearing in the real-time makeup video.

本申请的上述实施例提供的化妆进度检测装置与本申请实施例提供的化妆进度检测方法出于相同的发明构思,具有与其存储的应用程序所采用、运行或实现的方法相同的有益效果。The makeup progress detection device provided in the above embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided in the embodiments of the present application, and has the same beneficial effects as the methods adopted, run or realized by its stored application programs.

本申请实施方式还提供一种电子设备，以执行上述化妆进度检测方法。请参考图29，其示出了本申请的一些实施方式所提供的一种电子设备的示意图。如图 29所示，电子设备11包括：处理器1100，存储器1101，总线1102和通信接口1103，所述处理器1100、通信接口1103和存储器1101通过总线1102连接；所述存储器 1101中存储有可在所述处理器1100上运行的计算机程序，所述处理器1100运行所述计算机程序时执行本申请前述任一实施方式所提供的化妆进度检测方法。The embodiments of the present application further provide an electronic device for performing the above makeup progress detection method. Please refer to FIG. 29, which shows a schematic diagram of an electronic device provided by some embodiments of the present application. As shown in FIG. 29, the electronic device 11 includes a processor 1100, a memory 1101, a bus 1102, and a communication interface 1103; the processor 1100, the communication interface 1103, and the memory 1101 are connected through the bus 1102. The memory 1101 stores a computer program that can run on the processor 1100, and when the processor 1100 runs the computer program, it performs the makeup progress detection method provided by any of the foregoing embodiments of the present application.

其中,存储器1101可能包含高速随机存取存储器(RAM:Random Access Memory),也可能还包括非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个通信接口1103(可以是有线或者无线)实现该装置网元与至少一个其他网元之间的通信连接,可以使用互联网、广域网、本地网、城域网等。Wherein, the memory 1101 may include a high-speed random access memory (RAM: Random Access Memory), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. The communication connection between the device network element and at least one other network element is realized through at least one communication interface 1103 (which may be wired or wireless), and the Internet, wide area network, local network, metropolitan area network, etc. can be used.

总线1102可以是ISA总线、PCI总线或EISA总线等。所述总线可以分为地址总线、数据总线、控制总线等。其中,存储器1101用于存储程序,所述处理器 1100在接收到执行指令后,执行所述程序,前述本申请实施例任一实施方式揭示的所述化妆进度检测方法可以应用于处理器1100中,或者由处理器1100实现。The bus 1102 can be an ISA bus, a PCI bus, or an EISA bus, etc. The bus can be divided into address bus, data bus, control bus and so on. Wherein, the memory 1101 is used to store a program, and the processor 1100 executes the program after receiving an execution instruction, and the makeup progress detection method disclosed in any implementation manner of the above-mentioned embodiments of the present application can be applied to the processor 1100 , or implemented by the processor 1100.

处理器1100可能是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法的各步骤可以通过处理器1100中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1100可以是通用处理器，包括中央处理器(Central Processing Unit，简称CPU)、网络处理器(Network Processor，简称NP)等；还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1101，处理器1100读取存储器1101中的信息，结合其硬件完成上述方法的步骤。The processor 1100 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be implemented by an integrated logic circuit of hardware in the processor 1100 or by instructions in the form of software. The above-mentioned processor 1100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 1101; the processor 1100 reads the information in the memory 1101 and completes the steps of the above method in combination with its hardware.

本申请实施例提供的电子设备与本申请实施例提供的化妆进度检测方法出于相同的发明构思,具有与其采用、运行或实现的方法相同的有益效果。The electronic device provided in the embodiment of the present application is based on the same inventive concept as the method for detecting the progress of makeup provided in the embodiment of the present application, and has the same beneficial effect as the method adopted, operated or realized.

The embodiments of this application further provide a computer-readable storage medium corresponding to the makeup progress detection method provided in the foregoing embodiments. Referring to FIG. 30, the computer-readable storage medium shown is an optical disc 30 on which a computer program (that is, a program product) is stored; when the computer program is run by a processor, it executes the makeup progress detection method provided in any of the foregoing embodiments.

It should be noted that examples of the computer-readable storage medium also include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, and other optical or magnetic storage media, which are not enumerated here one by one.

The computer-readable storage medium provided by the above embodiments of this application arises from the same inventive concept as the makeup progress detection method provided in the embodiments of this application, and has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.

It should be noted that:

Numerous specific details are set forth in the description provided herein. It should be understood, however, that the embodiments of this application may be practiced without these specific details. In some instances, well-known structures and techniques are not shown in detail so as not to obscure the understanding of this description.

Similarly, it should be understood that in the above description of exemplary embodiments of this application, in order to streamline the application and aid understanding of one or more of the various inventive aspects, various features of this application are sometimes grouped together into a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following this detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.

In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments but not others, combinations of features from different embodiments are meant to fall within the scope of this application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

The above is only a preferred specific implementation of this application, but the protection scope of this application is not limited thereto. Any change or replacement that a person skilled in the art could readily conceive of within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be determined by the protection scope of the claims.

Claims (55)

1. A makeup progress detection method, comprising:

acquiring a real-time makeup video of a user currently applying a specific makeup look; and

determining, according to an initial frame image and a current frame image of the real-time makeup video, the user's current makeup progress for the specific makeup look;

wherein determining the user's current makeup progress for the specific makeup look according to the initial frame image and the current frame image of the real-time makeup video comprises:

the specific makeup look comprising at least one of highlight makeup, contour makeup, blush makeup, eyeliner makeup, eye shadow makeup, or eyebrow makeup, acquiring a first target area image corresponding to the specific makeup look from the initial frame image, and acquiring a second target area image corresponding to the specific makeup look from the current frame image;

acquiring single-channel versions of the first target area image and the second target area image that contain only the channel component corresponding to the specific makeup look;

binarizing the single-channel first target area image and second target area image respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image;

performing an AND operation on the first binarized mask image and the second binarized mask image to
obtain an intersection-area mask image;

generating, based on the intersection-area mask image, a new first target area image and a new second target area image respectively; and

determining, according to the new first target area image and the new second target area image, the user's current makeup progress for the specific makeup look.

2. The method according to claim 1, wherein the specific makeup look comprises highlight makeup or contour makeup, and determining the user's current makeup progress for the specific makeup look according to the initial frame image and the current frame image of the real-time makeup video comprises:

acquiring at least one target makeup area corresponding to the specific makeup look;

acquiring, according to the target makeup area, a first target area image corresponding to the specific makeup look from the initial frame image, and a second target area image corresponding to the specific makeup look from the current frame image; and

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

3.
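The binarize-then-AND intersection recited in claim 1 above can be sketched in a few lines (a minimal NumPy illustration, not the patented implementation; the threshold value is an arbitrary assumption, since the claim does not fix one):

```python
import numpy as np

def intersect_target_regions(region_a, region_b, thresh=0):
    # Binarize each single-channel target-area image, giving the first and
    # second binarized mask images of claim 1 (`thresh` is an assumed value).
    mask_a = (region_a > thresh).astype(np.uint8)
    mask_b = (region_b > thresh).astype(np.uint8)
    inter = mask_a & mask_b  # AND operation -> intersection-area mask image
    # Applying the shared mask to both frames yields the "new" first and
    # second target area images, which cover exactly the same pixels.
    return region_a * inter, region_b * inter

a = np.array([[10, 0], [5, 7]], dtype=np.uint8)
b = np.array([[3, 4], [0, 9]], dtype=np.uint8)
new_a, new_b = intersect_target_regions(a, b)
```

Restricting both frames to the intersection area in this way means the later per-pixel comparison only ever sees pixels present in both images.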
The method according to claim 2, wherein acquiring the first target area image corresponding to the specific makeup look from the initial frame image according to the target makeup area comprises:

detecting first face key points corresponding to the initial frame image;

acquiring a face-area image corresponding to the initial frame image according to the first face key points; and

acquiring, according to the first face key points and the target makeup area, the first target area image corresponding to the specific makeup look from the face-area image.

4. The method according to claim 3, wherein extracting the first target area image corresponding to the specific makeup look from the face-area image according to the first face key points and the target makeup area comprises:

determining, from the first face key points, one or more target key points located on the contour of the area corresponding to the target makeup area in the face-area image;

generating a mask image corresponding to the face-area image according to the target key points corresponding to the target makeup area; and

performing an AND operation on the mask image and the face-area image, to obtain the first target area image corresponding to the specific makeup look.

5.
The method according to claim 4, wherein generating the mask image corresponding to the face-area image according to the target key points corresponding to the target makeup area comprises:

if the number of target key points corresponding to a target makeup area is more than one, determining, according to the target key points, each edge coordinate of the target makeup area in the face-area image, and modifying the pixel values of all pixels within the area enclosed by the edge coordinates to a preset value, to obtain a mask area corresponding to the target makeup area;

if the number of target key points corresponding to a target makeup area is one, drawing an elliptical area of a preset size centered on the target key point, and modifying the pixel values of all pixels within the elliptical area to a preset value, to obtain a mask area corresponding to the target makeup area; and

modifying the pixel values of all pixels outside the mask areas to zero, to obtain the mask image corresponding to the face-area image.

6.
The method according to claim 1, wherein the specific makeup look comprises blush makeup, and determining the user's current makeup progress for the specific makeup look according to the initial frame image and the current frame image of the real-time makeup video comprises:

acquiring at least one target makeup area corresponding to the specific makeup look;

generating a beauty mask map according to the target makeup area; and

determining, according to the beauty mask map, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

7. The method according to claim 6, wherein determining the current makeup progress corresponding to the current frame image according to the beauty mask map, the initial frame image, and the current frame image comprises:

taking the beauty mask map as a reference, acquiring a made-up first target area image from the initial frame image, and a made-up second target area image from the current frame image; and

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

8. The method according to claim 1, wherein the specific makeup look comprises eyeliner makeup, and determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's
current makeup progress for the specific makeup look comprises:

acquiring beauty mask maps corresponding to the initial frame image and the current frame image;

simulating and generating, according to the initial frame image, a result image in which the eyeliner makeup has been completed; and

determining, according to the beauty mask maps, the result image, the initial frame image, and the current frame image, the current makeup progress corresponding to the current frame image.

9. The method according to claim 8, wherein determining the current makeup progress corresponding to the current frame image according to the beauty mask maps, the result image, the initial frame image, and the current frame image comprises:

taking the beauty mask map corresponding to the initial frame image as a reference, acquiring a made-up first target area image from the initial frame image;

acquiring a made-up second target area image from the current frame image according to the beauty mask map corresponding to the current frame image;

acquiring a third target area image of the eyeliner makeup according to the result image; and

determining, according to the first target area image, the second target area image, and the third target area image, the current makeup progress corresponding to the current frame image.

10.
The method according to claim 9, wherein determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image, and the third target area image comprises:

converting the first target area image, the second target area image, and the third target area image respectively into images containing a saturation channel in the HLS color space; and

determining, according to the converted first, second, and third target area images, the current makeup progress corresponding to the current frame image.

11. The method according to claim 10, wherein determining the current makeup progress corresponding to the current frame image according to the converted first, second, and third target area images comprises:

calculating a first average pixel value corresponding to the converted first target area image, a second average pixel value corresponding to the converted second target area image, and a third average pixel value corresponding to the converted third target area image;

calculating a first difference between the second average pixel value and the first average pixel value, and a second difference between the third average pixel value and the first average pixel value; and

calculating the ratio of the first difference to the second difference, to obtain the current makeup progress corresponding to the current frame image.

12.
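The averaging-and-ratio computation of claims 10 and 11 above reduces to a short calculation (a toy NumPy sketch; the uniform saturation values are made up for illustration, and the arrays stand in for the saturation channels of the initial-frame, current-frame, and simulated-result target areas):

```python
import numpy as np

def eyeliner_progress(sat_initial, sat_current, sat_result):
    # Claim 11: first/second/third average pixel values on the HLS
    # saturation channel, then progress = (second - first) / (third - first).
    first = float(np.mean(sat_initial))
    second = float(np.mean(sat_current))
    third = float(np.mean(sat_result))
    return (second - first) / (third - first)

sat_initial = np.full((4, 4), 40.0)   # bare-skin region (toy values)
sat_current = np.full((4, 4), 70.0)   # partially drawn eyeliner
sat_result = np.full((4, 4), 100.0)   # simulated finished eyeliner
progress = eyeliner_progress(sat_initial, sat_current, sat_result)
```

Because the finished-look image serves as the denominator's reference, the measure is normalized per user rather than against a fixed saturation target.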
The method according to any one of claims 9 to 11, further comprising, before determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image, and the third target area image:

performing alignment processing on the first target area image and the second target area image; and

performing alignment processing on the first target area image and the third target area image.

13. The method according to claim 12, wherein performing alignment processing on the first target area image and the second target area image comprises:

binarizing the first target area image and the second target area image respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; and

performing an AND operation on the first binarized mask image and the second binarized mask image, to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image.

14.
The method according to claim 13, wherein performing alignment processing on the first target area image and the second target area image further comprises:

acquiring the face-area image corresponding to the initial frame image and the face-area image corresponding to the result image;

performing an AND operation on the second mask image and the face-area image corresponding to the initial frame image, to obtain a new first target area image corresponding to the initial frame image; and

performing an AND operation on the second mask image and the face-area image corresponding to the current frame image, to obtain a new second target area image corresponding to the result image.

15. The method according to any one of claims 8 to 11, wherein acquiring the beauty mask maps corresponding to the initial frame image and the current frame image comprises:

acquiring an eyeliner style map selected by the user;

if the user's eye state in the initial frame image is an open-eye state, acquiring an open-eye style map corresponding to the eyeliner style map, and determining the open-eye style map as the beauty mask map corresponding to the initial frame image; and

if the user's eye state in the initial frame image is a closed-eye state, acquiring a closed-eye style map corresponding to the eyeliner style map, and determining the closed-eye style map as the beauty mask map corresponding to the initial frame image.

16.
The method according to claim 1, wherein the specific makeup look comprises eye shadow makeup, and determining the user's current makeup progress for the specific makeup look according to the initial frame image and the current frame image of the real-time makeup video comprises:

acquiring an eye shadow mask map;

splitting, for each target makeup area of the eye shadow makeup, the beauty mask map corresponding to that target makeup area out of the eye shadow mask map; and

determining, according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

17. The method according to claim 16, wherein determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the beauty mask map corresponding to each target makeup area comprises:

taking the beauty mask map corresponding to each target makeup area as a reference, acquiring from the initial frame image a first target area image corresponding to that target makeup area;

taking the beauty mask map corresponding to each target makeup area as a reference, acquiring from the current frame image a second target area image corresponding to that target makeup area; and

determining, according to the first target area image and the second target area image corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.
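The per-region extraction of claim 17 above, with each mask used as a reference to cut a target-area image out of a frame, might look like this (a hedged NumPy sketch; the two sub-region masks are hypothetical stand-ins for the masks split out of the eye shadow mask map):

```python
import numpy as np

def split_region_images(frame, region_masks):
    # Claim 17: with each per-region beauty mask as a reference, cut the
    # corresponding target-area image out of a frame; pixels outside the
    # mask are zeroed.
    return [frame * (mask > 0).astype(frame.dtype) for mask in region_masks]

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy single-channel frame
mask_lid = np.zeros((4, 4), dtype=np.uint8)
mask_lid[:2] = 1       # hypothetical eyelid sub-region
mask_tail = np.zeros((4, 4), dtype=np.uint8)
mask_tail[2:] = 1      # hypothetical eye-corner sub-region
regions = split_region_images(frame, [mask_lid, mask_tail])
```

Applying the same call to the initial frame and to the current frame yields, per region, the first and second target area images that the later claims compare.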
18. The method according to claim 17, wherein determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area comprises:

converting the first target area image and the second target area image corresponding to each target makeup area respectively into images containing a preset single-channel component in the HLS color space; and

determining, according to the converted first and second target area images corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

19. The method according to claim 18, wherein determining the current makeup progress corresponding to the current frame image according to the converted first and second target area images corresponding to each target makeup area comprises:

calculating, for each target makeup area, the absolute difference of the preset single-channel component between identically positioned pixels in the converted first target area image and second target area image corresponding to that area;

counting, for each target makeup area, the number of pixels whose absolute difference satisfies a preset makeup-completion condition;

calculating the ratio of the counted number of pixels for each target makeup area to the total number of pixels in the corresponding target makeup area, to obtain the
makeup progress corresponding to each target makeup area; and

calculating, according to the makeup progress corresponding to each target makeup area and a preset weight corresponding to each target makeup area, the current makeup progress corresponding to the current frame image.

20. The method according to any one of claims 7, 9, and 17, wherein acquiring the first target area image from the initial frame image with the beauty mask map as a reference comprises:

detecting first face key points corresponding to the initial frame image;

acquiring a face-area image corresponding to the initial frame image according to the first face key points; and

acquiring, with the beauty mask map as a reference, the made-up first target area image from the face-area image.

21. The method according to claim 20, wherein acquiring the made-up first target area image from the face-area image with the beauty mask map as a reference comprises:

converting the beauty mask map and the face-area image respectively into binarized images;

performing an AND operation on the binarized image corresponding to the beauty mask map and the binarized image corresponding to the face-area image, to obtain a first mask image corresponding to the intersection area of the beauty mask map and the face-area image; and

performing an AND operation on the first mask image and the face-area image corresponding to the initial frame image, to obtain the first target area image.

22.
The method according to claim 21, further comprising, before performing the AND operation on the binarized image corresponding to the beauty mask map and the binarized image corresponding to the face-area image:

determining, according to standard face key points corresponding to the beauty mask map, one or more first anchor points located on the contour of each makeup area in the beauty mask map;

determining, from the face-area image according to the first face key points, a second anchor point corresponding to each first anchor point; and

stretching the beauty mask map so that each first anchor point is stretched to the position corresponding to its second anchor point.

23. The method according to claim 20, wherein acquiring the made-up first target area image from the face-area image with the beauty mask map as a reference comprises:

splitting the beauty mask map into a plurality of sub-mask maps, each sub-mask map including at least one target makeup area;

converting each sub-mask map and the face-area image respectively into binarized images;

performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face-area image, to obtain a sub-mask image corresponding to each sub-mask map;

performing an AND operation on each sub-mask image and the face-area image corresponding to the initial frame image, to obtain a plurality of sub-target-area images corresponding to the initial frame image;
and merging the plurality of sub-target-area images into the first target area image corresponding to the initial frame image.

24. The method according to claim 23, further comprising, before performing the AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face-area image:

determining, according to the standard face key points corresponding to the beauty mask map, one or more first anchor points located on the contour of the target makeup area in a first sub-mask map, the first sub-mask map being any one of the plurality of sub-mask maps;

determining, from the face-area image according to the first face key points, a second anchor point corresponding to each first anchor point; and

stretching the first sub-mask map so that each first anchor point is stretched to the position corresponding to its second anchor point.

25. The method according to claim 1, wherein the specific makeup look comprises eyebrow makeup, and determining, according to the initial frame image and the current frame image of the real-time makeup video, the user's
current makeup progress for the specific makeup look comprises:

acquiring a first target area image corresponding to the eyebrows from the initial frame image, and a second target area image corresponding to the eyebrows from the current frame image; and

determining, according to the first target area image and the second target area image, the current makeup progress corresponding to the current frame image.

26. The method according to claim 25, wherein acquiring the first target area image corresponding to the eyebrows from the initial frame image comprises:

detecting first face key points corresponding to the initial frame image;

acquiring a face-area image corresponding to the initial frame image according to the first face key points; and

acquiring, according to eyebrow key points included in the first face key points, the first target area image corresponding to the eyebrows from the face-area image.

27.
The method according to claim 26, wherein intercepting the first target area image corresponding to the eyebrows from the face-area image according to the eyebrow key points included in the first face key points comprises:

interpolating the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points, to obtain a plurality of interpolation points;

cutting out from the face-area image the closed region formed by connecting all eyebrow key points between the eyebrow head and the eyebrow peak and the plurality of interpolation points, to obtain a partial eyebrow image between the eyebrow head and the eyebrow peak;

cutting out from the face-area image the closed region formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail, to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail; and

splicing the partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail into the first target area image corresponding to the eyebrows.

28. The method according to any one of claims 2, 7, and 25, wherein determining, according to the first target area image and the second target area image, the current
makeup progress corresponding to the current frame image comprises:

converting the first target area image and the second target area image respectively into images containing a preset single-channel component in the HSV color space; and

determining, according to the converted first target area image and second target area image, the current makeup progress corresponding to the current frame image.

29. The method according to claim 28, wherein determining the current makeup progress corresponding to the current frame image according to the converted first target area image and second target area image comprises:

calculating the absolute difference of the preset single-channel component between identically positioned pixels in the converted first target area image and second target area image;

counting the number of pixels whose absolute difference satisfies a preset makeup-completion condition; and

calculating the ratio of the counted number of pixels to the total number of pixels in all target makeup areas of the first target area image, to obtain the current makeup progress corresponding to the current frame image.

30. The method according to any one of claims 2, 7, and 25, further comprising, before determining the current
Before makeup progress, also include: 分别对所述第一目标区域图像和所述第二目标区域图像进行二值化处理,得到所述第一目标区域图像对应的第一二值化掩膜图像和所述第二目标区域图像对应的第二二值化掩膜图像;Performing binarization processing on the first target area image and the second target area image respectively, to obtain a first binarized mask image corresponding to the first target area image and corresponding to the second target area image The second binarized mask image of ; 对所述第一二值化掩膜图像和所述第二二值化掩膜图像进行与运算,得到所述第一目标区域图像与所述第二目标区域图像的相交区域对应的第二掩膜图像;Perform an AND operation on the first binarized mask image and the second binarized mask image to obtain a second mask corresponding to the intersection area of the first target area image and the second target area image film image; 获取所述初始帧图像对应的人脸区域图像及所述当前帧图像对应的人脸区域图像;Acquiring the face area image corresponding to the initial frame image and the face area image corresponding to the current frame image; 对所述第二掩膜图像和所述初始帧图像对应的人脸区域图像进行与运算,得到所述初始帧图像对应的新第一目标区域图像;performing an AND operation on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; 对所述第二掩膜图像和所述当前帧图像对应的人脸区域图像进行与运算,得到所述当前帧图像对应的新第二目标区域图像。An AND operation is performed on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image. 31.根据权利要求7、17、25任一项所述的方法,其特征在于,所述确定所述当前帧图像对应的当前化妆进度之前,还包括:31. The method according to any one of claims 7, 17, 25, characterized in that before determining the current makeup progress corresponding to the current frame image, further comprising: 分别对所述第一目标区域图像和所述第二目标区域图像中的上妆区域进行边界腐蚀处理。Boundary erosion processing is performed on makeup areas in the first target area image and the second target area image respectively. 32.根据权利要求1所述的方法,其特征在于,所述特定妆容包括粉底妆容;所述根据所述实时化妆视频的初始帧图像和当前帧图像,确定所述用户进行所述特定妆容的当前化妆进度,包括:32. 
The method according to claim 1, wherein the specific makeup comprises foundation makeup, and the determining, according to the initial frame image and the current frame image of the real-time makeup video, of the current makeup progress of the user performing the specific makeup comprises: simulating and generating, according to the initial frame image, a result image obtained after the specific makeup is completed; acquiring the overall image brightness corresponding to each of the initial frame image, the result image and the current frame image; acquiring the face area brightness corresponding to each of the initial frame image, the result image and the current frame image; and determining, according to the overall image brightness and the face area brightness corresponding to each of the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image.

33.
The method according to claim 32, wherein the acquiring of the overall image brightness corresponding to each of the initial frame image, the result image and the current frame image comprises: converting the initial frame image, the result image and the current frame image respectively into grayscale images; calculating the average gray level of the pixels in the grayscale image corresponding to each of the converted initial frame image, result image and current frame image; and determining the average gray levels corresponding to the initial frame image, the result image and the current frame image as the overall image brightness corresponding to the initial frame image, the result image and the current frame image respectively.

34. The method according to claim 32, wherein the acquiring of the face area brightness corresponding to each of the initial frame image, the result image and the current frame image comprises: acquiring the face area images corresponding to the initial frame image, the result image and the current frame image respectively; converting the face area images corresponding to the initial frame image, the result image and the current frame image respectively into face grayscale images; and calculating the average gray level of the pixels in the face grayscale image corresponding to each of the initial frame image, the result image and the current frame image, to obtain the face area brightness corresponding to each of the initial frame image, the result image and the current frame image.
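Claims 33 and 34 define both brightness measures as plain gray-level means, one over the whole frame and one over the face crop. The following NumPy sketch illustrates that computation; the function names, the BT.601 luma weights, and the `(x0, y0, x1, y1)` face-box format are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

BT601 = np.array([0.299, 0.587, 0.114])  # standard luma weights (assumed)

def to_gray(img_rgb):
    """Convert an H x W x 3 RGB array to a grayscale array."""
    return img_rgb[..., :3] @ BT601

def overall_brightness(img_rgb):
    """Per claim 33: overall image brightness = mean gray level of the frame."""
    return float(to_gray(img_rgb).mean())

def face_brightness(img_rgb, face_box):
    """Per claim 34: face-area brightness = mean gray level inside the face crop.

    `face_box` is a hypothetical (x0, y0, x1, y1) rectangle produced by any
    face detector; the patent itself derives the crop from face key points.
    """
    x0, y0, x1, y1 = face_box
    return float(to_gray(img_rgb[y0:y1, x0:x1]).mean())
```

Applying both functions to the initial frame, the simulated result image, and the current frame yields the three brightness pairs that claims 35-38 consume.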
35. The method according to any one of claims 32-34, wherein the determining, according to the overall image brightness and the face area brightness corresponding to each of the initial frame image, the result image and the current frame image, of the current makeup progress corresponding to the current frame image comprises: determining, according to the overall image brightness and the face area brightness corresponding to the initial frame image and the overall image brightness and the face area brightness corresponding to the current frame image, a first environment-change brightness corresponding to the current frame image; determining, according to the overall image brightness and the face area brightness corresponding to the initial frame image and the overall image brightness and the face area brightness corresponding to the result image, a second environment-change brightness corresponding to the result image; and determining, according to the first environment-change brightness, the second environment-change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image, the current makeup progress corresponding to the current frame image.

36.
The method according to claim 35, wherein the determining, according to the overall image brightness and the face area brightness corresponding to the initial frame image and the overall image brightness and the face area brightness corresponding to the current frame image, of the first environment-change brightness corresponding to the current frame image comprises: calculating the difference between the overall image brightness corresponding to the initial frame image and the face area brightness corresponding to the initial frame image, to obtain the ambient brightness of the initial frame image; calculating the difference between the overall image brightness corresponding to the current frame image and the face area brightness corresponding to the current frame image, to obtain the ambient brightness of the current frame image; and determining the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image as the first environment-change brightness corresponding to the current frame image.

37.
The method according to claim 35, wherein the determining, according to the first environment-change brightness, the second environment-change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image, of the current makeup progress corresponding to the current frame image comprises: determining, according to the first environment-change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image, a makeup brightness change value corresponding to the current frame image; determining, according to the second environment-change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image, a makeup brightness change value corresponding to the result image; and calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image, to obtain the current makeup progress corresponding to the current frame image.

38.
The method according to claim 37, wherein the determining, according to the first environment-change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image, of the makeup brightness change value corresponding to the current frame image comprises: calculating the difference between the face area brightness corresponding to the current frame image and the face area brightness corresponding to the initial frame image, to obtain a total brightness change value corresponding to the current frame image; and calculating the difference between the total brightness change value and the first environment-change brightness, to obtain the makeup brightness change value corresponding to the current frame image.

39. The method according to claim 35, further comprising: if the first environment-change brightness is greater than a preset threshold, determining the makeup progress corresponding to the previous frame image as the current makeup progress corresponding to the current frame image; and sending first prompt information to the user's terminal, the first prompt information being used to prompt the user to return to the brightness environment corresponding to the initial frame image to apply makeup.

40. The method according to claim 1, wherein the specific makeup comprises concealer makeup, and the determining, according to the initial frame image and the current frame image of the real-time makeup video, of the current makeup progress of the user performing the specific makeup comprises: acquiring facial blemish information corresponding to each of the initial frame image and the current frame image; calculating, according to the facial blemish information corresponding to the initial frame image and the facial blemish information corresponding to the current frame image, a facial blemish difference value between the current frame image and the initial frame image; if the facial blemish difference value is greater than a preset threshold, calculating, according to the facial blemish difference value and the facial blemish information corresponding to the initial frame image, the current makeup progress corresponding to the current frame image; and if the facial blemish difference value is less than or equal to the preset threshold, acquiring a result image obtained after the user completes the concealer makeup, and determining, according to the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image.

41.
The method according to claim 40, wherein the facial blemish information comprises blemish categories and corresponding blemish counts, and the calculating, according to the facial blemish information corresponding to the initial frame image and the facial blemish information corresponding to the current frame image, of the facial blemish difference value between the current frame image and the initial frame image comprises: calculating, for each blemish category, the difference between the blemish count corresponding to the initial frame image and the blemish count corresponding to the current frame image; and calculating the sum of the differences corresponding to the blemish categories, and taking the obtained sum as the facial blemish difference value between the current frame image and the initial frame image.

42. The method according to claim 40, wherein the calculating, according to the facial blemish difference value and the facial blemish information corresponding to the initial frame image, of the current makeup progress corresponding to the current frame image comprises: calculating the sum of the blemish counts corresponding to the blemish categories in the facial blemish information corresponding to the initial frame image, to obtain a total blemish count; and calculating the ratio of the facial blemish difference value to the total blemish count, and taking the ratio as the current makeup progress corresponding to the current frame image.

43.
The method according to claim 40, wherein the acquiring of the result image obtained after the user completes the concealer makeup and the determining, according to the initial frame image, the result image and the current frame image, of the current makeup progress corresponding to the current frame image comprise: simulating and generating, according to the initial frame image, the result image obtained after the user completes the concealer makeup; acquiring the face area images corresponding to the initial frame image, the result image and the current frame image respectively; and determining, according to the face area images corresponding to the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image.

44. The method according to claim 43, wherein the determining, according to the face area images corresponding to the initial frame image, the result image and the current frame image, of the current makeup progress corresponding to the current frame image comprises: converting the face area images corresponding to the initial frame image, the result image and the current frame image respectively into images containing a saturation channel in the HLS color space; calculating, through a preset filtering algorithm, the smoothing factors corresponding to the converted face area images of the initial frame image, the result image and the current frame image respectively; and determining, according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image, the current makeup progress corresponding to the current frame image.

45. The method according to claim 44, wherein the determining, according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image, of the current makeup progress corresponding to the current frame image comprises: calculating a first difference between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image; calculating a second difference between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image; and calculating the ratio of the first difference to the second difference, and taking the ratio as the current makeup progress corresponding to the current frame image.

46. The method according to claim 40, wherein the acquiring of the facial blemish information corresponding to each of the initial frame image and the current frame image comprises: acquiring the face area images corresponding to the initial frame image and the current frame image respectively; and detecting, through a preset skin detection model, the blemish count corresponding to each blemish category in the face area images corresponding to the initial frame image and the current frame image respectively, to obtain the facial blemish information corresponding to each of the initial frame image and the current frame image.

47.
The method according to any one of claims 3, 26, 34, 43 and 46, wherein the acquiring of the face area image corresponding to the initial frame image comprises: performing rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image; cropping, from the corrected initial frame image, an image containing the face area according to the corrected first face key points; and scaling the image containing the face area to a preset size, to obtain the face area image corresponding to the initial frame image.

48. The method according to claim 47, wherein the performing of rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image comprises: determining left-eye center coordinates and right-eye center coordinates respectively according to the left-eye key points and the right-eye key points included in the first face key points; determining, according to the left-eye center coordinates and the right-eye center coordinates, the rotation angle and the rotation center point coordinates corresponding to the initial frame image; and performing rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.

49.
The method according to claim 47, wherein the cropping, from the corrected initial frame image, of the image containing the face area according to the corrected first face key points comprises: determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points; determining, according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value, a cropping box corresponding to the face area in the corrected initial frame image; and cropping, according to the cropping box, the image containing the face area from the corrected initial frame image.

50. The method according to claim 49, further comprising: enlarging the cropping box by a preset multiple; and cropping, according to the enlarged cropping box, the image containing the face area from the corrected initial frame image.

51. The method according to claim 47, further comprising: performing scaling and translation on the corrected first face key points according to the size of the image containing the face area and the preset size.

52.
The method according to claim 1, further comprising: detecting whether both the initial frame image and the current frame image contain only the face image of the same user; if so, performing the operation of determining the current makeup progress of the user performing the specific makeup; and if not, sending prompt information to the user's terminal, the prompt information being used to prompt the user to keep only the face of the same user appearing in the real-time makeup video.

53. A makeup progress detection apparatus, comprising: a video acquisition module, configured to acquire a real-time makeup video of a user currently performing a specific makeup; and a makeup progress determination module, configured to determine, according to the initial frame image and the current frame image of the real-time makeup video, the current makeup progress of the user performing the specific makeup; the makeup progress determination module being specifically configured to, when the specific makeup comprises at least one of highlight makeup, contour makeup, blush makeup, eyeliner makeup, eyeshadow makeup or eyebrow makeup: acquire a first target area image corresponding to the specific makeup from the initial frame image, and acquire a second target area image corresponding to the specific makeup from the current frame image; acquire the single-channel first target area image and the single-channel second target area image containing only the channel component corresponding to the specific makeup; perform binarization on the single-channel first target area image and the single-channel second target area image containing only the channel component respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; perform an AND operation on the first binarized mask image and the second binarized mask image to obtain an intersection area mask image; generate a new first target area image and a new second target area image respectively based on the intersection area mask image; and determine, according to the new first target area image and the new second target area image, the current makeup progress of the user performing the specific makeup.

54. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor runs the computer program to implement the method according to any one of claims 1-52.

55. A computer-readable storage medium on which a computer program is stored, wherein the program is executed by a processor to implement the method according to any one of claims 1-52.
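Taken together, claims 35-38 reduce the foundation-makeup progress to a few arithmetic steps over the three (overall brightness, face-area brightness) pairs. The sketch below in plain Python illustrates that arithmetic; the function name and argument layout are illustrative, the brightness pairs are assumed to have been measured as in claims 33-34, and the result image's makeup brightness change is assumed to be nonzero.

```python
def foundation_progress(init, result, current):
    """Foundation-makeup progress following the arithmetic of claims 35-38.

    Each argument is an (overall_brightness, face_brightness) pair for the
    initial frame, the simulated finished-makeup result image, and the
    current frame, respectively.
    """
    def ambient(pair):
        # Per claim 36: ambient brightness = overall brightness - face brightness.
        overall, face = pair
        return overall - face

    # First / second environment-change brightness (claims 35-36).
    env_change_cur = abs(ambient(current) - ambient(init))
    env_change_res = abs(ambient(result) - ambient(init))

    # Per claim 38: makeup brightness change = total face-brightness change
    # minus the environment-change brightness.
    cur_change = (current[1] - init[1]) - env_change_cur
    res_change = (result[1] - init[1]) - env_change_res

    # Per claim 37: progress = ratio of the two makeup brightness changes.
    return cur_change / res_change
```

Subtracting the environment-change brightness before taking the ratio is what makes the measure robust to a lighting change between the initial and current frames; claim 39 additionally falls back to the previous frame's progress when that change exceeds a threshold.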
CN202111015242.4A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Active CN114155569B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202111306864.2A CN115731591A (en) 2021-08-31 2021-08-31 A makeup progress detection method, device, equipment and storage medium
CN202111308461.1A CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111308454.1A CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111308470.0A CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111015242.4A CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111015242.4A CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CN202111308454.1A Division CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111306864.2A Division CN115731591A (en) 2021-08-31 2021-08-31 A makeup progress detection method, device, equipment and storage medium
CN202111308461.1A Division CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111308470.0A Division CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114155569A CN114155569A (en) 2022-03-08
CN114155569B true CN114155569B (en) 2022-11-04

Family

ID=80461794

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202111308461.1A Pending CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111308470.0A Pending CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111308454.1A Pending CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111306864.2A Pending CN115731591A (en) 2021-08-31 2021-08-31 A makeup progress detection method, device, equipment and storage medium
CN202111015242.4A Active CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN202111308461.1A Pending CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111308470.0A Pending CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111308454.1A Pending CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111306864.2A Pending CN115731591A (en) 2021-08-31 2021-08-31 A makeup progress detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (5) CN115731142A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206351A (en) * 2023-01-06 2023-06-02 北京字跳网络技术有限公司 Information prompting method, device, computer equipment and storage medium
CN117078675B (en) * 2023-10-16 2024-02-06 太和康美(北京)中医研究院有限公司 Cosmetic efficacy evaluation methods, devices, equipment and media based on image analysis

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805090A (en) * 2018-06-14 2018-11-13 广东工业大学 A kind of virtual examination cosmetic method based on Plane Gridding Model
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110008813A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 Face identification method and system based on In vivo detection technology
CN111066060A (en) * 2017-07-13 2020-04-24 资生堂美洲公司 Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 A makeup treatment method, device, electronic device and storage medium
CN111651040A (en) * 2020-05-27 2020-09-11 华为技术有限公司 Interaction method and electronic device of electronic device for skin detection

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673475B (en) * 2009-09-15 2013-01-09 Yulong Computer Telecommunication Scientific (Shenzhen) Co Ltd Method, device and system for realizing makeup guidance at a terminal
CN103927719B (en) * 2014-04-04 2017-05-17 Beijing Cheetah Network Technology Co Ltd Picture processing method and device
CN105278376A (en) * 2015-10-16 2016-01-27 Gree Electric Appliances Inc of Zhuhai Method and device for using equipment based on face recognition technology
CN105844242A (en) * 2016-03-23 2016-08-10 Hubei Zhiben Information Technology Co Ltd Method for detecting skin color in an image
KR101872635B1 (en) * 2017-02-01 2018-06-29 LG Household & Health Care Ltd Automatic make-up evaluation system and operating method thereof
US10621771B2 (en) * 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
CN109427075A (en) * 2017-08-24 2019-03-05 Cal-Comp Big Data Inc Body information analysis apparatus and eye shadow analysis method thereof
CN108230255A (en) * 2017-09-19 2018-06-29 Beijing SenseTime Technology Development Co Ltd Method, apparatus and electronic device for implementing image enhancement
CN107820027A (en) * 2017-11-02 2018-03-20 Beijing Qihoo Technology Co Ltd Method, apparatus, computing device and computer-readable storage medium for dressing up a video character
CN109359317A (en) * 2017-11-02 2019-02-19 Guangdong Shuxiang Intelligent Technology Co Ltd Model building method for lipstick color matching selection and lipstick color selection method
CN107945135B (en) * 2017-11-30 2021-03-02 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, device, storage medium and electronic device
CN107969058A (en) * 2017-12-29 2018-04-27 Shanghai Phicomm Communication Co Ltd Intelligent dressing table and control method
CN108256235A (en) * 2018-01-19 2018-07-06 Shanghai Aiyouwei Software Development Co Ltd Makeup instructing method
CN108765268A (en) * 2018-05-28 2018-11-06 BOE Technology Group Co Ltd Auxiliary makeup method, device and smart mirror
CN109934092A (en) * 2019-01-18 2019-06-25 OneConnect Smart Technology Co Ltd (Shenzhen) Color identification method, apparatus, computer device and storage medium
CN111914598B (en) * 2019-05-09 2024-07-26 Beijing NavInfo Technology Co Ltd Method, device, equipment and storage medium for detecting face key points in consecutive frames
CN110111245B (en) * 2019-05-13 2023-12-08 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, device, terminal and computer-readable storage medium
JP6710440B1 (en) * 2019-07-15 2020-06-17 太丞 岡田 Makeup evaluation device, makeup evaluation system, and makeup evaluation method
CN112507766B (en) * 2019-09-16 2023-11-24 Gree Electric Appliances Inc of Zhuhai Face image extraction method, storage medium and terminal device
CN110738723A (en) * 2019-10-12 2020-01-31 Sinovation Ventures (Beijing) Enterprise Management Co Ltd Method, system and electronic device for generating a texture map based on a face mesh model
US11657481B2 (en) * 2019-11-18 2023-05-23 Shinyfields Limited Systems and methods for selective enhancement of skin features in images
CN112528057A (en) * 2020-12-11 2021-03-19 Guangdong Science Center Dressing recommendation method, recommendation device, storage medium and terminal
CN112784773B (en) * 2021-01-27 2022-09-27 Spreadtrum Communications (Shanghai) Co Ltd Image processing method and device, storage medium and terminal
CN112766234B (en) * 2021-02-23 2023-05-12 Beijing SenseTime Technology Development Co Ltd Image processing method and device, electronic device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111066060A (en) * 2017-07-13 2020-04-24 Shiseido Americas Corp Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN108805090A (en) * 2018-06-14 2018-11-13 Guangdong University of Technology Virtual makeup try-on method based on a planar mesh model
CN109584180A (en) * 2018-11-30 2019-04-05 Shenzhen Lianmeng Technology Co Ltd Face image processing method, device, electronic device and computer storage medium
CN110008813A (en) * 2019-01-24 2019-07-12 Alibaba Group Holding Ltd Face recognition method and system based on liveness detection technology
CN111291642A (en) * 2020-01-20 2020-06-16 Shenzhen SenseTime Technology Co Ltd Makeup processing method, apparatus, electronic device and storage medium
CN111651040A (en) * 2020-05-27 2020-09-11 Huawei Technologies Co Ltd Interaction method for an electronic device for skin detection, and electronic device

Also Published As

Publication number Publication date
CN115731591A (en) 2023-03-03
CN115731142A (en) 2023-03-03
CN115937919A (en) 2023-04-07
CN115761827A (en) 2023-03-07
CN114155569A (en) 2022-03-08

Similar Documents

Publication Publication Date Title
JP7632917B2 (en) Method, system and computer program for generating a 3D head deformation model
US9142054B2 (en) System and method for changing hair color in digital images
JP7462120B2 (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
CN108305312B (en) Method and device for generating 3D virtual image
US9691136B2 (en) Eye beautification under inaccurate localization
CN114565508B (en) Virtual clothes changing method and device
CN111445410A (en) Texture enhancement method, device and equipment based on texture image and storage medium
CN109952594A (en) Image processing method, device, terminal and storage medium
KR20230110787A (en) Methods and systems for forming personalized 3D head and face models
CN108229278A (en) Face image processing process, device and electronic equipment
CN107273837A (en) Method and system for virtual makeup
CA2424963A1 (en) Method and system for enhancing portrait images
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
JP2012098808A (en) Image processing device, image processing method, and control program
CN113344837B (en) Face image processing method and device, computer readable storage medium and terminal
CN114283052A (en) Method and device for cosmetic transfer and training of cosmetic transfer network
CN108022207A (en) Image processing method, device, storage medium and electronic equipment
CN113837017B (en) Cosmetic progress detection method, device, equipment and storage medium
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113469874A (en) Beauty treatment method and device, electronic equipment and storage medium
US20190347469A1 (en) Method of improving image analysis
CN114972014A (en) Image processing method and device and electronic equipment
CN113421197A (en) Processing method and processing system for beautified images
CN113837018B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant