
CN112634125A - Automatic face replacement method based on off-line face database - Google Patents

Automatic face replacement method based on off-line face database

Info

Publication number
CN112634125A
Authority
CN
China
Prior art keywords
face
face image
candidate
image
tested
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011484330.4A
Other languages
Chinese (zh)
Other versions
CN112634125B (en)
Inventor
张九龙
马仲杰
屈小娥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Mengdong Information Technology Co.,Ltd.
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202011484330.4A priority Critical patent/CN112634125B/en
Publication of CN112634125A publication Critical patent/CN112634125A/en
Application granted
Publication of CN112634125B publication Critical patent/CN112634125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical



Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract



The invention discloses an automatic face replacement method based on an offline face database. Specifically: construct an offline face database; mirror each face image in the data set; classify all face images in the candidate face image set according to the Euler angles of the face poses; input a face image to be tested; align the face images in the candidate face image set and the face image to be tested to a common coordinate system; compare the face image to be tested with all face images in the candidate face image set; adjust the illumination of all face images in the initial candidate face set according to the face image to be tested; compute the Euclidean distances between the candidate face images and the face image to be tested and arrange them from small to large; and select the top-ranked candidate face to replace the face to be tested for output. The automatic face replacement method based on an offline face database of the present invention solves the problem of low replacement efficiency in the prior art.


Description

Automatic face replacement method based on off-line face database
Technical Field
The invention belongs to the technical field of digital image processing methods, and relates to an automatic face replacement method based on an offline face database.
Background
Advances in digital photography have enabled people to capture large numbers of high-resolution images and share them over the Internet, but they have also created a new problem: privacy. More and more people take photos on the street and upload them to the Internet, yet these photos often contain bystanders' faces that are published without their consent, violating their privacy; likewise, the street-view maps of Baidu and Gaode (AMap) photograph large numbers of pedestrians without asking their permission, so protecting personal privacy has become increasingly important. The traditional approach is to obscure identity by blurring, pixelating, or simply covering the face region of the captured photograph with black pixels. This is undesirable because it reduces the visual appeal of the images, and these methods currently require manual operation, which becomes cumbersome and inefficient as the number of images grows rapidly.
Disclosure of Invention
The invention aims to provide an automatic face replacement method based on an offline face database, which solves the problem of low replacement efficiency in the prior art.
The technical scheme adopted by the invention is that an automatic face replacement method based on an off-line face database is implemented according to the following steps:
step 1, constructing an off-line face database;
step 2, carrying out mirror image processing on each face image in the data set obtained in the step 1, then taking the original face image and the face image after mirror image as a candidate face image set, and classifying all face images in the candidate face image set according to the Euler angles of the face postures;
step 3, inputting a face image to be detected, using a dlib detection model to execute face detection to extract all faces, estimating the face postures of the face image to be detected, calculating Euler angles of all the faces in the face image to be detected, and aligning the face image in the candidate face image set and the face image to be detected to a common coordinate system;
step 4, comparing the face image to be detected with all face images in the candidate face image set to obtain an initial candidate face set;
step 5, adjusting the illumination of all face images in the initial candidate face set according to the face image to be detected to obtain an adjusted candidate face set;
step 6, calculating the Euclidean distances between the candidate face images in the candidate face set obtained in step 5 and the face image to be tested, arranging them from small to large, and selecting the top-ranked candidate face to replace the face to be tested for output; the six steps compose as sketched below.
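Read as a pipeline, steps 1-6 compose as follows. This is a minimal Python sketch, not the patented implementation: every helper name (build_database, detect_and_align, pose_box, screen_candidates, relight, pick_replacement, composite) is hypothetical and stands in for the operations detailed in the description and embodiment below.

```python
def replace_face(probe_image):
    """End-to-end sketch of steps 1-6; all helper functions are hypothetical."""
    db = build_database()                         # step 1: offline, mirrored, pose-binned faces
    probe = detect_and_align(probe_image)         # step 3: dlib detection + alignment
    box = pose_box(probe.yaw, probe.pitch)        # steps 2-3: Euler angles -> pose box
    initial = screen_candidates(db[box], probe)   # step 4: gender/age/pose/resolution/blur/illumination
    relit = [relight(c, probe) for c in initial]  # step 5: transfer the probe's illumination
    best = pick_replacement(relit, probe)         # step 6: smallest Euclidean distance wins
    return composite(best, probe_image)           # paste the chosen face back into the photo
```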
The present invention is also characterized in that,
in step 2, classifying all the face images in the candidate face image set according to the euler angles of the face postures specifically comprises the following steps:
step 2.1, using a dlib detection model to perform feature-point detection on the face images in the candidate face image set and obtain the coordinates of 6 key points of each face image, namely the left eye corner, right eye corner, nose tip, left mouth corner, right mouth corner, and chin; then, using an average face model, taking the 6 key points as base points of a 3D model and constructing the corresponding 3D model; then using the solvePnP function of OpenCV to calculate a rotation vector from the positions of the key points in the 3D model, and calculating the yaw angle Yaw and the pitch angle Pitch of the Euler angles from the rotation vector;
step 2.2, selecting a face image with a yaw angle within +/-25 degrees and a pitch angle within +/-15 degrees as a candidate face image;
step 2.3, classifying the alternative face images, specifically:
the yaw angle is evenly divided into 5 intervals from-25 degrees to 25 degrees as the abscissa, the pitch angle is evenly divided into three intervals from-15 degrees to 15 degrees as the ordinate, and the corresponding face images meeting the numerical values of the abscissa and the ordinate are placed in corresponding grids to form 15 attitude boxes, namely, the alternative face images are divided into 15 types.
In step 3, the Euler angles of all faces in the face image to be tested are calculated according to the method of step 2.1, yielding the yaw angle Yaw and the pitch angle Pitch.
The step 4 specifically comprises the following steps:
step 4.1, carrying out gender screening on the alternative face image: according to the gender of the face image to be detected, selecting the face image with the same gender as that of the face image to be detected in 15 gesture boxes of the alternative face image to be reserved as the next candidate face image;
and 4.2, performing age screening on all candidate face images obtained in the step 4.1: according to the age interval of the face image to be detected, detecting a face in accordance with the corresponding age interval in the candidate face image obtained in the step 4.1 as a next candidate face image;
4.3, selecting a face which has a difference of not more than 3 degrees with the yaw angle and the pitch angle of the face image to be detected in the candidate face image obtained in the step 4.2 as a candidate face image of the next step;
step 4.4, selecting the face meeting the resolution requirement in the candidate face set in the step 4.3 as a candidate face image of the next step;
step 4.5, calculating the blur distance d_B between the candidate face images obtained in step 4.4 and the face image to be tested, sorting the calculated distances d_B from small to large, and retaining the top 50% of the images as candidate face images for the next step;
step 4.6, calculating the illumination distance d_L between each candidate face image obtained in step 4.5 and the face image to be tested, sorting d_L from small to large, and finally retaining the face images with d_L in the top 10% as the next candidates.
The step 4.5 is specifically as follows:
step 4.5.1, normalizing the gray level intensity of the eye area of each face image into a zero mean value and a unit variance, which specifically comprises the following steps:
$$x^{*} = \frac{x - \bar{x}}{\sigma} \qquad (1)$$

wherein x is the gray-level intensity of a single pixel in the eye region of the candidate face image obtained in step 4.4 or of the face image to be tested, $\bar{x}$ is the mean gray-level intensity of that eye region, σ is the standard deviation of the gray-level intensity of that eye region, and x* is the normalized gray-level intensity;
step 4.5.2, calculating the histograms h^(1) and h^(2) of the normalized eye-region gradient magnitudes, and multiplying each histogram by a weighting function that uses the square of the histogram bin index, yielding two weighted histograms $\tilde{h}^{(1)}$ and $\tilde{h}^{(2)}$:

$$\tilde{h}^{(i)}(n) = n^{2}\, h^{(i)}(n), \qquad i \in \{1, 2\} \qquad (2)$$

wherein n denotes the bin index of the histogram, h^(i) denotes the histogram of normalized eye-region gradient magnitudes of the candidate face image or of the face image to be tested, and $\tilde{h}^{(i)}$ denotes the corresponding weighted histogram; i = 1 denotes the candidate face image and i = 2 the face image to be tested;
step 4.5.3, calculating the blur distance between the candidate face image and the face image to be tested, namely the histogram intersection distance (HID):

$$d_B = 1 - \sum_{n}\min\big(\tilde{h}^{(1)}(n),\,\tilde{h}^{(2)}(n)\big) \qquad (3)$$

wherein d_B denotes the blur distance between the two face images;
step 4.5.4, calculating the blur distance d_B between every candidate face image and the face image to be tested according to steps 4.5.1-4.5.3, sorting from small to large, and retaining the top 50% of the images as candidate face images for the next step.
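A minimal NumPy/OpenCV sketch of steps 4.5.1-4.5.3, following the reconstruction of equations (1)-(3) above. The eye-region crops are assumed to have been extracted from the aligned faces beforehand; the bin count, the fixed histogram range, and the exact HID normalization are illustrative choices, not values given in the filing.

```python
import cv2
import numpy as np

def weighted_gradient_hist(eye_gray, bins=32):
    """Steps 4.5.1-4.5.2: normalize intensities (eq. 1), histogram the gradient
    magnitudes, and weight each bin by the square of its index (eq. 2)."""
    x = eye_gray.astype(np.float64)
    x = (x - x.mean()) / (x.std() + 1e-8)    # eq. (1): zero mean, unit variance
    gx = cv2.Sobel(x, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(x, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    # A shared, fixed bin range keeps histograms of different images comparable
    h, _ = np.histogram(mag, bins=bins, range=(0.0, 10.0))
    h = h / max(h.sum(), 1)
    n = np.arange(bins, dtype=np.float64)
    return (n ** 2) * h                       # eq. (2): weight by squared bin index

def blur_distance(eye1, eye2, bins=32):
    """Eq. (3): histogram-intersection-style distance between weighted histograms."""
    h1 = weighted_gradient_hist(eye1, bins)
    h2 = weighted_gradient_hist(eye2, bins)
    return 1.0 - np.minimum(h1, h2).sum()
```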
The step 4.6 is specifically as follows:
step 4.6.1, using a face re-labeling method to map the candidate face images obtained in step 4.5 and the face image to be tested onto a cylinder-like average face shape;
step 4.6.2, calculating the image intensity $\hat{I}_c(x,y)$ of the face replacement region in each RGB color channel:

$$\hat{I}_c(x,y) = \rho_c \sum_{k=1}^{9} a_{c,k}\, H_k(n(x,y)) \qquad (4)$$

wherein $\hat{I}_c(x,y)$ denotes the image intensity of the face replacement region in color channel c; n(x, y) is the surface normal at image location (x, y), ρ_c is a constant albedo for each of the three color channels, the coefficients a_{c,k} describe the lighting conditions, and H_k(n(x, y)) is a spherical harmonic image;
step 4.6.3, creating an orthogonal basis ψ_k(x, y) by applying Gram-Schmidt orthogonalization to the harmonic basis H_k(n(x, y)); $\hat{I}_c(x,y)$ is then expressed as:

$$\hat{I}_c(x,y) = \sum_{k=1}^{9} \beta_{c,k}\, \psi_k(x,y) \qquad (5)$$

wherein β_{c,k} denote the illumination coefficients and ψ_k(x, y) the spherical harmonic images after Gram-Schmidt orthogonalization;
step 4.6.4, calculating, according to steps 4.6.1-4.6.3, the coefficients β_{c,k} of the candidate face image obtained in step 4.5 and of the face image to be tested, denoted $\beta^{(1)}_{c,k}$ and $\beta^{(2)}_{c,k}$ respectively, and then calculating the illumination distance d_L between the candidate face image and the face image to be tested:

$$d_L(I^{(1)}, I^{(2)}) = \sqrt{\sum_{c}\sum_{k}\big(\beta^{(1)}_{c,k} - \beta^{(2)}_{c,k}\big)^{2}} \qquad (6)$$

wherein $\beta^{(1)}_{c,k}$ denote the illumination coefficients of the candidate face image, $\beta^{(2)}_{c,k}$ denote the illumination coefficients of the face image to be tested, and d_L(I^(1), I^(2)) denotes the illumination distance between the candidate face image I^(1) and the face image to be tested I^(2);
step 4.6.5, calculating the illumination distance d_L between every candidate face image and the face image to be tested, sorting the calculated distances d_L from small to large, and retaining the faces with d_L in the top 10% as candidate face images for the next step.
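The sketch below shows one way to realize steps 4.6.2-4.6.4: evaluate nine real spherical harmonic basis images from the surface normals of the cylindrical mean face, orthogonalize them with a QR decomposition (a matrix form of Gram-Schmidt), fit the coefficients β_{c,k} by least squares, and take the Euclidean distance between coefficient vectors as in equation (6). The normal map of the mean face shape is assumed given, and the harmonic basis omits normalization constants.

```python
import numpy as np

def sh_basis(normals):
    """First nine real spherical harmonic images H_k(n), one column per k.
    `normals` has shape (H, W, 3) with unit vectors (nx, ny, nz)."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    ones = np.ones_like(nx)
    H = [ones, nx, ny, nz,
         nx * ny, nx * nz, ny * nz,
         nx ** 2 - ny ** 2, 3 * nz ** 2 - 1]
    return np.stack([h.ravel() for h in H], axis=1)   # shape (H*W, 9)

def illumination_coeffs(image_channel, normals):
    """Fit beta_{c,k} of eq. (5) for one color channel by least squares
    on the orthogonalized basis (QR plays the role of Gram-Schmidt)."""
    B = sh_basis(normals)
    Q, _ = np.linalg.qr(B)                            # orthogonal basis psi_k
    beta, *_ = np.linalg.lstsq(Q, image_channel.ravel(), rcond=None)
    return beta                                        # shape (9,)

def illumination_distance(img1, img2, normals):
    """Eq. (6): Euclidean distance between the stacked RGB coefficient vectors."""
    b1 = np.concatenate([illumination_coeffs(img1[..., c], normals) for c in range(3)])
    b2 = np.concatenate([illumination_coeffs(img2[..., c], normals) for c in range(3)])
    return float(np.linalg.norm(b1 - b2))
```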
The step 5 specifically comprises the following steps:
within the replacement region, the illumination of the face image to be tested I^(2) is applied to the candidate face image I^(1) obtained in step 4 using the image formula; the image intensity of the candidate face image after this application is:

$$\tilde{I}^{(1)}_{c}(x,y) = I^{(1)}_{c}(x,y)\,\frac{\hat{I}^{(2)}_{c}(x,y)}{\hat{I}^{(1)}_{c}(x,y)} \qquad (7)$$

wherein $\hat{I}^{(1)}_{c}(x,y)$ is the image intensity of the face replacement region in each RGB color channel of the candidate face image I^(1), calculated according to equation (5); $\hat{I}^{(2)}_{c}(x,y)$ is the corresponding image intensity for the face image to be tested I^(2); and $\tilde{I}^{(1)}_{c}(x,y)$ is the image intensity of the candidate face image after the illumination of I^(2) has been applied to I^(1).
The step 6 specifically comprises: calculating the Euclidean distances between all candidate face images processed in step 5 and the face image to be tested, arranging the calculated distances from small to large, and selecting the top-ranked candidate face to replace the face to be tested for output.
The invention has the beneficial effects that:
(1) Compared with traditional methods, the method can fully automatically replace faces across different poses, genders, ages, resolutions, degrees of image blur, and illumination conditions, without manual assistance.
(2) The method uses the offline face database to perform fully automatic face exchange; unlike the traditional approach of obscuring identity by pixelation or painting black pixels, it achieves a good visual effect.
Drawings
FIG. 1 is a flow chart of an automatic face replacement method based on an offline face database of the present invention;
FIG. 2 is a schematic diagram of 6 key points in an automatic face replacement method based on an offline face database according to the present invention;
FIG. 3 is a schematic diagram of 3-dimensional coordinate distribution used in estimating face pose in an automatic face replacement method based on an offline face database according to the present invention;
fig. 4 is a schematic diagram of 68 face key points detected by a dlib tool used in the automatic face replacement method based on the offline face database.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to an automatic face replacement method based on an off-line face database, which is implemented according to the following steps:
step 1, constructing an off-line face database;
step 2, carrying out mirror image processing on each face image in the data set obtained in the step 1, then taking the original face image and the face image after mirror image as a candidate face image set, and classifying all face images in the candidate face image set according to the Euler angles of the face postures; the classification of all the face images in the candidate face image set according to the euler angles of the face postures specifically comprises the following steps:
step 2.1, using a dlib detection model to perform feature point detection on the face images in the candidate face image set, obtaining 6 key point coordinates of the face images, as shown in fig. 2, namely, coordinates of a left eye corner, a right eye corner, a nose tip, a left mouth corner, a right mouth corner and a lower jaw, then using an average face model to take the 6 key points as basic points of the 3D model, constructing a corresponding 3D model, then using a solvePnP function of OpenCV, calculating a rotation vector of the face according to the positions of the key points in the 3D model, and then calculating a Yaw angle Yaw and a Pitch angle Pitch of an euler angle by using the rotation vector;
step 2.2, selecting a face image with a yaw angle within +/-25 degrees and a pitch angle within +/-15 degrees as a candidate face image;
step 2.3, classifying the alternative face images, specifically:
uniformly dividing a yaw angle from-25 degrees to 25 degrees into 5 intervals as horizontal coordinates, uniformly dividing a pitch angle from-15 degrees to 15 degrees into three intervals as vertical coordinates, and placing corresponding face images meeting the numerical values of the horizontal and vertical coordinates into corresponding grids to form 15 attitude boxes, namely, dividing the alternative face images into 15 types;
step 3, inputting a face image to be detected, using a dlib detection model to execute face detection to extract all faces, estimating the face postures of the face image to be detected, calculating the Yaw angle Yaw and the Pitch angle Pitch of all the faces in the face image to be detected, and aligning the face image in the candidate face image set and the face image to be detected to a common coordinate system;
step 4, comparing the face image to be detected with all face images in the candidate face image set to obtain an initial candidate face set; the method specifically comprises the following steps:
step 4.1, carrying out gender screening on the alternative face image: according to the gender of the face image to be detected, selecting the face image with the same gender as that of the face image to be detected in 15 gesture boxes of the alternative face image to be reserved as the next candidate face image;
and 4.2, performing age screening on all candidate face images obtained in the step 4.1: according to the age interval of the face image to be detected, detecting a face in accordance with the corresponding age interval in the candidate face image obtained in the step 4.1 as a next candidate face image;
4.3, selecting a face which has a difference of not more than 3 degrees with the yaw angle and the pitch angle of the face image to be detected in the candidate face image obtained in the step 4.2 as a candidate face image of the next step;
step 4.4, selecting the face meeting the resolution requirement in the candidate face set in the step 4.3 as a candidate face image of the next step;
step 4.5, calculating the blur distance d_B between the candidate face images obtained in step 4.4 and the face image to be tested, sorting the calculated distances d_B from small to large, and retaining the top 50% of the images as candidate face images for the next step;
step 4.6, calculating the illumination distance d_L between each candidate face image obtained in step 4.5 and the face image to be tested, sorting d_L from small to large, and finally retaining the face images with d_L in the top 10% as the next candidates.
The step 4.5 is specifically as follows:
step 4.5.1, normalizing the gray level intensity of the eye area of each face image into a zero mean value and a unit variance, which specifically comprises the following steps:
$$x^{*} = \frac{x - \bar{x}}{\sigma} \qquad (1)$$

wherein x is the gray-level intensity of a single pixel in the eye region of the candidate face image obtained in step 4.4 or of the face image to be tested, $\bar{x}$ is the mean gray-level intensity of that eye region, σ is the standard deviation of the gray-level intensity of that eye region, and x* is the normalized gray-level intensity;
step 4.5.2, calculating the histograms h^(1) and h^(2) of the normalized eye-region gradient magnitudes, and multiplying each histogram by a weighting function that uses the square of the histogram bin index, yielding two weighted histograms $\tilde{h}^{(1)}$ and $\tilde{h}^{(2)}$:

$$\tilde{h}^{(i)}(n) = n^{2}\, h^{(i)}(n), \qquad i \in \{1, 2\} \qquad (2)$$

wherein n denotes the bin index of the histogram, h^(i) denotes the histogram of normalized eye-region gradient magnitudes of the candidate face image or of the face image to be tested, and $\tilde{h}^{(i)}$ denotes the corresponding weighted histogram; i = 1 denotes the candidate face image and i = 2 the face image to be tested;
step 4.5.3, calculating the blur distance between the candidate face image and the face image to be tested, namely the histogram intersection distance (HID):

$$d_B = 1 - \sum_{n}\min\big(\tilde{h}^{(1)}(n),\,\tilde{h}^{(2)}(n)\big) \qquad (3)$$

wherein d_B denotes the blur distance between the two face images;
step 4.5.4, calculating the blur distance d_B between every candidate face image and the face image to be tested according to steps 4.5.1-4.5.3, sorting from small to large, and retaining the top 50% of the images as candidate face images for the next step.
The step 4.6 is specifically as follows:
step 4.6.1, using a face re-labeling method to map the candidate face images obtained in step 4.5 and the face image to be tested onto a cylinder-like average face shape;
step 4.6.2, calculating the image intensity $\hat{I}_c(x,y)$ of the face replacement region in each RGB color channel:

$$\hat{I}_c(x,y) = \rho_c \sum_{k=1}^{9} a_{c,k}\, H_k(n(x,y)) \qquad (4)$$

wherein $\hat{I}_c(x,y)$ denotes the image intensity of the face replacement region in color channel c; n(x, y) is the surface normal at image location (x, y), ρ_c is a constant albedo for each of the three color channels, the coefficients a_{c,k} describe the lighting conditions, and H_k(n(x, y)) is a spherical harmonic image;
step 4.6.3, creating an orthogonal basis ψ_k(x, y) by applying Gram-Schmidt orthogonalization to the harmonic basis H_k(n(x, y)); $\hat{I}_c(x,y)$ is then expressed as:

$$\hat{I}_c(x,y) = \sum_{k=1}^{9} \beta_{c,k}\, \psi_k(x,y) \qquad (5)$$

wherein β_{c,k} denote the illumination coefficients and ψ_k(x, y) the spherical harmonic images after Gram-Schmidt orthogonalization;
step 4.6.4, calculating, according to steps 4.6.1-4.6.3, the coefficients β_{c,k} of the candidate face image obtained in step 4.5 and of the face image to be tested, denoted $\beta^{(1)}_{c,k}$ and $\beta^{(2)}_{c,k}$ respectively, and then calculating the illumination distance d_L between the candidate face image and the face image to be tested:

$$d_L(I^{(1)}, I^{(2)}) = \sqrt{\sum_{c}\sum_{k}\big(\beta^{(1)}_{c,k} - \beta^{(2)}_{c,k}\big)^{2}} \qquad (6)$$

wherein $\beta^{(1)}_{c,k}$ denote the illumination coefficients of the candidate face image, $\beta^{(2)}_{c,k}$ denote the illumination coefficients of the face image to be tested, and d_L(I^(1), I^(2)) denotes the illumination distance between the candidate face image I^(1) and the face image to be tested I^(2);
step 4.6.5, calculating the illumination distance d_L between every candidate face image and the face image to be tested, sorting the calculated distances d_L from small to large, and retaining the faces with d_L in the top 10% as candidate face images for the next step.
Step 5, adjusting the illumination of all face images in the initial candidate face set according to the face image to be detected to obtain an adjusted candidate face set; the method specifically comprises the following steps:
within the replacement region, the illumination of the face image to be tested I^(2) is applied to the candidate face image I^(1) obtained in step 4 using the image formula; the image intensity of the candidate face image after this application is:

$$\tilde{I}^{(1)}_{c}(x,y) = I^{(1)}_{c}(x,y)\,\frac{\hat{I}^{(2)}_{c}(x,y)}{\hat{I}^{(1)}_{c}(x,y)} \qquad (7)$$

wherein $\hat{I}^{(1)}_{c}(x,y)$ is the image intensity of the face replacement region in each RGB color channel of the candidate face image I^(1), calculated according to equation (5); $\hat{I}^{(2)}_{c}(x,y)$ is the corresponding image intensity for the face image to be tested I^(2); and $\tilde{I}^{(1)}_{c}(x,y)$ is the image intensity of the candidate face image after the illumination of I^(2) has been applied to I^(1).
Step 6, calculating the Euclidean distances between all candidate face images processed in step 5 and the face image to be tested, arranging them from small to large, and selecting the top-ranked candidate face to replace the face to be tested for output.
Examples
An automatic face replacement method based on an off-line face database is implemented according to the following steps:
Step 1, construct the face database using CelebA-HQ, a widely used high-definition face data set. 30000 high-definition face images with a resolution of 1024 x 1024 are cropped and rectified, and all face images in the database are mirrored in batch with a Python script, increasing the number of candidate faces to 60000. A minimal sketch of the batch mirroring follows.
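This sketch assumes the CelebA-HQ originals sit in a single directory; the directory names and file-naming scheme are illustrative, not taken from the filing.

```python
import os
import cv2  # OpenCV for image I/O and flipping

SRC_DIR = "celeba_hq"           # assumed location of the 30000 originals
DST_DIR = "celeba_hq_mirrored"  # mirrored copies double the set to 60000
os.makedirs(DST_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue  # skip non-image files
    mirrored = cv2.flip(img, 1)  # flip around the vertical axis
    cv2.imwrite(os.path.join(DST_DIR, "mirror_" + name), mirrored)
```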
Step 2, group the face data set obtained in step 1: compute Euler angles from the face key points obtained with dlib, and classify each face into one of 15 categories according to its Euler angles.
The pose-estimation problem is also known as the PnP (Perspective-n-Point) problem; face pose estimation mainly recovers the angular information of the face orientation. We represent the obtained face pose by three Euler angles (pitch, yaw, roll). First, a 3D face model with 6 key points (left eye corner, right eye corner, nose tip, left mouth corner, right mouth corner, chin) is defined; then the corresponding 6 face key points are detected in the picture with dlib, the rotation vector is solved with the solvePnP function of OpenCV, and finally the rotation vector is converted into Euler angles. The faces in the database are divided into 15 pose boxes according to the pitch and yaw angles of the Euler angles. Because all faces in the CelebA-HQ face data set are aligned, only the yaw and pitch angles are considered. To replace faces in different orientations, we assign each face to one of the 15 pose boxes using its yaw and pitch angles. When our system is given a test image containing a face to be replaced, it performs face detection to extract the face, estimates its pose, and aligns the face to a common coordinate system. The system then consults the face database and selects possible candidate faces for replacement. Note that only candidate faces in the same pose box of the library are considered; this ensures that the replacement faces are relatively similar in pose, allowing the system to use simple 2D image compositing rather than a three-dimensional model-based approach requiring precise alignment.
Step 2.1, as shown in fig. 4, feature-point detection is performed on the two-dimensional face images in the face database using dlib's 68-key-point detection model shape_predictor_68_face_landmarks.dat, yielding 68 key points in a fixed index order.
Step 2.2, from the detection result of the previous step, select the points at the eye corners, nose tip, mouth corners, and chin, and record the indices of the 6 required key points, which are: chin: 8; nose tip: 30; left eye corner: 36; right eye corner: 45; left mouth corner: 48; right mouth corner: 54. These serve as the known 2D coordinates.
Step 2.3, obtain the 3D coordinate corresponding to each 2D point. In practice there is no need for an accurate three-dimensional model of each face (nor can one be obtained), so the average face model coordinates are used. The following coordinates of the 6 key points are commonly used: nose tip: (0.0, 0.0, 0.0); chin: (0.0, -330.0, -65.0); left corner of the left eye: (-225.0, 170.0, -135.0); right corner of the right eye: (225.0, 170.0, -135.0); left mouth corner: (-150.0, -150.0, -125.0); right mouth corner: (150.0, -150.0, -125.0). Note that these points are expressed in an arbitrary coordinate system, i.e., world coordinates.
Step 2.4, solve for the rotation vector using the solvePnP function of OpenCV; its output comprises a rotation vector and a translation vector. Only the rotation information matters here, so we operate mainly on the rotation vector.
Step 2.5, the rotation vector is one representation of an object's rotation and is the form OpenCV commonly uses. Since Euler angles are needed, the rotation vector is converted here: first into a quaternion, and then from the quaternion into Euler angles.
Step 2.6, partition the face database by the yaw and pitch angles of the Euler angles. Since less reliable extreme poses may lead to unreliable face replacement, only images with a yaw angle within ±25° and a pitch angle within ±15° are kept as alternatives. Both angles are binned in 10° spans: the yaw angle ranges from -25° to 25°, giving 5 bins, and the pitch angle from -15° to 15°, giving 3 bins. Considering yaw and pitch together, the face database is divided into 15 pose boxes according to pose, as shown in fig. 3.
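The following sketch ties steps 2.1-2.6 together: dlib landmark detection, solvePnP with the average-face model points listed above, conversion of the rotation vector to yaw and pitch, and assignment to one of the 15 pose boxes. The pinhole camera matrix and the particular Euler-angle decomposition are common OpenCV conventions, not prescribed by the patent (which goes through a quaternion instead).

```python
import math
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# 3D base points of the average face model (world coordinates, step 2.3)
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip           -> landmark 30
    (0.0, -330.0, -65.0),      # chin               -> landmark 8
    (-225.0, 170.0, -135.0),   # left eye corner    -> landmark 36
    (225.0, 170.0, -135.0),    # right eye corner   -> landmark 45
    (-150.0, -150.0, -125.0),  # left mouth corner  -> landmark 48
    (150.0, -150.0, -125.0),   # right mouth corner -> landmark 54
])
LANDMARK_IDS = [30, 8, 36, 45, 48, 54]

def yaw_pitch(image_bgr):
    """Return (yaw, pitch) in degrees for the first detected face, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts_2d = np.array([(shape.part(i).x, shape.part(i).y) for i in LANDMARK_IDS],
                      dtype=np.float64)
    h, w = gray.shape
    cam = np.array([[w, 0, w / 2.0],   # simple pinhole camera approximation
                    [0, w, h / 2.0],
                    [0, 0, 1]], dtype=np.float64)
    _, rvec, _ = cv2.solvePnP(MODEL_POINTS, pts_2d, cam, np.zeros((4, 1)))
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> rotation matrix
    # One common Euler decomposition; conventions differ between implementations
    pitch = math.degrees(math.atan2(R[2, 1], R[2, 2]))
    yaw = math.degrees(math.atan2(-R[2, 0], math.hypot(R[0, 0], R[1, 0])))
    return yaw, pitch

def pose_box(yaw, pitch):
    """Map (yaw, pitch) to one of the 15 pose boxes: 5 yaw bins x 3 pitch bins."""
    if abs(yaw) > 25 or abs(pitch) > 15:
        return None                        # extreme poses are discarded
    col = min(int((yaw + 25) // 10), 4)    # five 10-degree yaw intervals
    row = min(int((pitch + 15) // 10), 2)  # three 10-degree pitch intervals
    return row * 5 + col
```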
Step 3, when an image to be tested is input, dlib is used to perform face detection and extract all faces; the pose of each face to be tested is estimated with dlib's key-point detection, its Euler angles are calculated, and the faces to be tested and the faces in the database are aligned to a common coordinate system.
Step 4, compare the yaw and pitch angles of the face to be tested with those of the faces in the corresponding pose box, and select candidate faces from that box. To produce a perceptually realistic replacement image, the pose of the face to be tested and the pose of the replacement face must be closely similar, beyond what merely sharing a pose box guarantees; therefore, the yaw and pitch angles of the faces selected from the corresponding pose box may each differ from those of the face to be tested by no more than 3°. In addition, the system requires the selected candidate faces to be similar to the face to be tested in gender, age, image quality, lighting, color, and so on. We define a set of attributes to describe the similarity of facial appearance in images, mainly gender, age, resolution, degree of blur, lighting, and color. In this section, we describe these attributes and the corresponding criteria for selecting the set of candidate faces.
Step 4.1, to preserve realism after the face swap, the genders of the face to be tested and the candidate faces must be consistent, so gender screening is performed on the candidate faces: if a male face is detected in the image to be tested, male faces are sought in the corresponding pose box of the face database as the next candidate face set.
Step 4.2, to ensure that the swapped-in face blends well with the non-face parts of the original image, the age of the candidate faces is constrained. Age is divided into the following 5 intervals: 0-10, 10-20, 20-40, 40-60, and 60-80 years. When the detected face belongs to an age interval, faces meeting that age requirement are sought in the candidate face set of step 4.1 as the next candidate face set.
Step 4.3, to ensure that the poses of the face to be tested and the replacement face are very similar, a more precise pose selection is applied to the age-screened candidate set of step 4.2: from that set, the faces whose yaw and pitch angles each differ from those of the face to be tested by no more than 3° are selected as the next candidate face set.
Step 4.4, ensure that the resolution of a candidate face is consistent with that of the original image; a significant difference in this attribute would produce a noticeable mismatch between the inner and outer regions of the face after replacement. We use the distance between the upper-left corner of the left eye and the upper-right corner of the right eye to define the resolution of a face image. Since high-resolution images can always be downsampled, only a lower bound on candidate resolution is needed: faces are selected from the candidate set of step 4.3 whose inter-eye distance is at least 80% of the inter-eye distance of the face in the image to be tested. A condensed sketch of the screening in steps 4.1-4.4 follows.
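In this sketch the per-image attributes (gender, age bracket, yaw/pitch, eye distance) are assumed to be precomputed offline and stored with each database record; the field names and the gender/age estimators themselves are not specified by the patent and are illustrative here.

```python
from dataclasses import dataclass

@dataclass
class FaceRecord:
    path: str
    gender: str      # "male" / "female", precomputed offline
    age_bin: int     # 0: 0-10, 1: 10-20, 2: 20-40, 3: 40-60, 4: 60-80
    yaw: float
    pitch: float
    eye_dist: float  # distance between outer eye corners, in pixels

def screen_candidates(candidates, probe):
    """Apply the gender, age, pose, and resolution filters of steps 4.1-4.4."""
    kept = [c for c in candidates if c.gender == probe.gender]              # 4.1
    kept = [c for c in kept if c.age_bin == probe.age_bin]                  # 4.2
    kept = [c for c in kept
            if abs(c.yaw - probe.yaw) <= 3 and abs(c.pitch - probe.pitch) <= 3]  # 4.3
    kept = [c for c in kept if c.eye_dist >= 0.8 * probe.eye_dist]          # 4.4
    return kept
```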
Step 4.5, to ensure that the degree of blur of the candidate faces is consistent with that of the face to be tested, a simple heuristic measure is used to assess the similarity of blur between the two images. Step 4.5 is specifically:
step 4.5.1, we normalize the gray-level intensity of the eye region of each aligned face image to zero mean and unit variance, as shown in equation (1):
$$x^{*} = \frac{x - \bar{x}}{\sigma} \qquad (1)$$

wherein x is the gray-level intensity of a single pixel in the eye region of the face to be tested, $\bar{x}$ is the mean gray-level intensity of that eye region, σ is the standard deviation of the gray-level intensity of that eye region, and x* is the normalized gray-level intensity.
Step 4.5.2, we calculate the histograms h^(1) and h^(2) of normalized eye-region gradient magnitudes and multiply each by a weighting function that uses the square of the histogram bin index, as shown in equation (2):

$$\tilde{h}^{(i)}(n) = n^{2}\, h^{(i)}(n), \qquad i \in \{1, 2\} \qquad (2)$$

wherein n denotes the bin index of the histogram, i = 1 denotes the candidate face image, i = 2 denotes the image to be tested, and h^(i) denotes the histogram of normalized eye-region gradient magnitudes of each of the two images.
Step 4.5.3, the blur distance is calculated as the histogram intersection distance (HID) between the two weighted histograms $\tilde{h}^{(1)}$ and $\tilde{h}^{(2)}$, as shown in equation (3):

$$d_B = 1 - \sum_{n}\min\big(\tilde{h}^{(1)}(n),\,\tilde{h}^{(2)}(n)\big) \qquad (3)$$

wherein d_B denotes the blur distance between the two faces and $\tilde{h}^{(1)}$, $\tilde{h}^{(2)}$ denote the weighted histograms of the two images. The blur distances d_B are sorted from small to large, and the top 50% of the images are retained as candidate face images for the next step.
Step 4.6, ensure that the facial illumination of a candidate face is essentially consistent with the face to be tested. For each face in the candidate set of step 4.5, the illumination and average color within the replacement region are calculated, and, given the face to be tested, the faces whose illumination and color are similar to it are selected as the next candidate face set.
The step 4.6 is specifically as follows:
step 4.6.1, the human face re-labeling method is used to represent the face shape as a cylinder-like "average face shape".
Step 4.6.2, a simple orthographic projection is used to define the mapping from the face surface to the image. The image intensity I_c(x, y) of the face replacement region in each RGB color channel can be approximated by a linear combination of 9 spherical harmonics, denoted $\hat{I}_c(x,y)$, as shown in equation (4):

$$\hat{I}_c(x,y) = \rho_c \sum_{k=1}^{9} a_{c,k}\, H_k(n(x,y)) \qquad (4)$$

wherein $\hat{I}_c(x,y)$ denotes the approximate image intensity of the face replacement region in color channel c, n(x, y) is the surface normal at image location (x, y), ρ_c is a constant albedo for each of the three color channels (representing the average color within the replacement region), the coefficients a_{c,k} describe the lighting conditions, and H_k(n) are the spherical harmonic images.
Step 4.6.3, an orthogonal basis ψ_k(x, y) is created by applying Gram-Schmidt orthogonalization to the harmonic basis H_k(n); the approximate image intensity $\hat{I}_c(x,y)$ can then be expanded in this orthogonal basis:

$$\hat{I}_c(x,y) = \sum_{k=1}^{9} \beta_{c,k}\, \psi_k(x,y) \qquad (5)$$

wherein β_{c,k} denote the illumination coefficients and ψ_k(x, y) the spherical harmonic images after Gram-Schmidt orthogonalization; the other parameters are as in the face formula above.
Step 4.6.4, the RGB albedo is converted to the HSV color space for the color comparison; for the illumination comparison, the illumination distance d_L is defined as the L2 (Euclidean) distance between the corresponding illumination coefficients:

$$d_L(I^{(1)}, I^{(2)}) = \sqrt{\sum_{c}\sum_{k}\big(\beta^{(1)}_{c,k} - \beta^{(2)}_{c,k}\big)^{2}} \qquad (6)$$

wherein d_L denotes the illumination distance between the two images. The distances d_L are sorted from small to large, and the faces with d_L in the top 10% are retained as candidate face images for the next step.
Selecting candidate faces from the library using the appearance attributes described above is a nearest-neighbor search problem. To speed it up, a sequential selection method is used. Given a query face, we first select on pose, gender, age, and resolution to approximate the face to be tested. This step reduces the number of potential replacement candidates from 60000 to a few thousand; next, the blur distance d_B and then the illumination distance d_L further reduce the candidate face set; finally, the first 50 faces are selected as the candidate face set for the next step.
Step 5, adjusting the illumination of all face images in the initial candidate face set according to the face image to be detected to obtain an adjusted candidate face set; the method specifically comprises the following steps:
within the replacement region, the illumination of the face image to be tested I^(2) is applied to the candidate face image I^(1) obtained in step 4 using the image formula; the image intensity of the candidate face image after this application is:

$$\tilde{I}^{(1)}_{c}(x,y) = I^{(1)}_{c}(x,y)\,\frac{\hat{I}^{(2)}_{c}(x,y)}{\hat{I}^{(1)}_{c}(x,y)} \qquad (7)$$

wherein $\hat{I}^{(1)}_{c}(x,y)$ is the image intensity of the face replacement region in each RGB color channel of the candidate face image I^(1), calculated according to equation (5); $\hat{I}^{(2)}_{c}(x,y)$ is the corresponding image intensity for the face image to be tested I^(2); and $\tilde{I}^{(1)}_{c}(x,y)$ is the image intensity of the candidate face image after the illumination of I^(2) has been applied to I^(1).
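A sketch of the relighting in equation (7), reusing sh_basis and illumination_coeffs from the illumination-distance sketch above. The epsilon guard against division by zero is an implementation choice, and the ratio-of-approximations form follows the reconstruction given above rather than a verbatim formula from the filing.

```python
import numpy as np

def relight(candidate, probe, normals, eps=1e-6):
    """Apply the probe's illumination to the candidate face (eq. 7):
    multiply each channel by the ratio of the spherical-harmonic approximations."""
    out = np.empty_like(candidate, dtype=np.float64)
    B = sh_basis(normals)
    Q, _ = np.linalg.qr(B)
    hw = candidate.shape[:2]
    for c in range(3):
        approx_cand = (Q @ illumination_coeffs(candidate[..., c], normals)).reshape(hw)
        approx_probe = (Q @ illumination_coeffs(probe[..., c], normals)).reshape(hw)
        out[..., c] = candidate[..., c] * approx_probe / (approx_cand + eps)
    return np.clip(out, 0, 255).astype(np.uint8)
```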
Step 6, calculating the Euclidean distances between all candidate face images processed in step 5 and the face image to be tested, arranging them from small to large, and selecting the top-ranked candidate face to replace the face to be tested for output.
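Step 6 then reduces to a nearest-neighbor pick. A sketch, assuming the Euclidean distance is taken pixel-wise over the replacement region (the filing does not spell out the feature space):

```python
import numpy as np

def pick_replacement(candidates, probe, mask):
    """Rank relit candidates by Euclidean distance to the probe inside the
    replacement region `mask` (a boolean H x W array) and return the closest."""
    def dist(c):
        diff = (c.astype(np.float64) - probe.astype(np.float64))[mask]
        return np.linalg.norm(diff)
    return min(candidates, key=dist)
```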

Claims (8)

1.一种基于离线人脸数据库的自动面部替换方法,其特征在于,具体按照如下步骤实施:1. an automatic face replacement method based on offline human face database, is characterized in that, specifically implements according to the following steps: 步骤1,构建离线人脸数据库;Step 1, build an offline face database; 步骤2,将步骤1获得的数据集中的每一张人脸图像进行镜像处理,然后将原始人脸图像和镜像后的人脸图像作为候选人脸图像集,根据人脸姿态的欧拉角对候选人脸图像集中的所有人脸图像进行分类;Step 2: Mirror each face image in the data set obtained in step 1, and then use the original face image and the mirrored face image as the candidate face image set, according to the Euler angle of the face pose. Classify all face images in the candidate face image set; 步骤3,输入待测人脸图像,使用dlib检测模型执行人脸检测以提取全部人脸,并估计待测人脸图像的人脸姿态,计算出待测人脸图像中所有人脸的欧拉角,并将候选人脸图像集中的人脸图像和待测人脸图像对齐到一个公共坐标系中;Step 3, input the face image to be tested, use the dlib detection model to perform face detection to extract all faces, and estimate the face pose of the face image to be tested, and calculate the Euler of all faces in the face image to be tested. angle, and align the face image in the candidate face image set and the face image to be tested into a common coordinate system; 步骤4,将待测人脸图像和候选人脸图像集中的所有人脸图像进行比较,获得初始候选人脸集;Step 4: Compare the face image to be tested with all face images in the candidate face image set to obtain an initial candidate face set; 步骤5,根据待测人脸图像调整初始候选人脸集中所有人脸图像的照明,获得调整后的候选人脸集;Step 5: Adjust the illumination of all face images in the initial candidate face set according to the face image to be tested, and obtain the adjusted candidate face set; 步骤6,计算经步骤5得到的候选人脸集中的候选人脸图像与待测人脸图像之间的欧氏距离并按照从小到大的顺序排列,选择排名第一的候选人脸与待测人脸进行替换输出。Step 6: Calculate the Euclidean distance between the candidate face image in the candidate face set obtained in step 5 and the face image to be tested and arrange it in ascending order, and select the first candidate face and the face image to be tested. Face replacement output. 2.根据权利要求1所述的一种基于离线人脸数据库的自动面部替换方法,其特征在于,所述步骤2中根据人脸姿态的欧拉角对候选人脸图像集中的所有人脸图像进行分类具体为:2. a kind of automatic face replacement method based on offline human face database according to claim 1, is characterized in that, in described step 2, according to the Euler angle of human face posture to all face images in candidate face image collection Categorized as: 步骤2.1,使用dlib检测模型对候选人脸图像集中的人脸图像进行特征点检测,获取人脸图像的6个关键点坐标,即就是左眼角、右眼角、鼻尖、左嘴角、右嘴角、下颌的坐标,然后采用平均脸模型将6个关键点作为3D模型的基本点,构建对应的3D模型,然后采用OpenCV的solvePnP函数,根据关键点在3D模型中的位置计算出人脸的旋转向量,然后将旋转向量计算欧拉角的偏航角Yaw和俯仰角Pitch;Step 2.1, use the dlib detection model to detect the feature points of the face images in the candidate face image set, and obtain the coordinates of 6 key points of the face image, that is, the left eye corner, the right eye corner, the nose tip, the left mouth corner, the right mouth corner, and the lower jaw. 
Then use the average face model to take 6 key points as the basic points of the 3D model to construct the corresponding 3D model, and then use the solvePnP function of OpenCV to calculate the rotation vector of the face according to the position of the key points in the 3D model, Then calculate the yaw angle Yaw and pitch angle Pitch of the Euler angle from the rotation vector; 步骤2.2,选择偏航角度在±25°内且俯仰角度在±15°的人脸图像作为备选人脸图像;Step 2.2, select the face image with the yaw angle within ±25° and the pitch angle within ±15° as the candidate face image; 步骤2.3,对备选人脸图像进行分类,具体为:Step 2.3, classify the candidate face images, specifically: 将偏航角从-25°到25°均匀划分为5个区间作为横坐标,将俯仰角从-15°到15°均匀划分为三个区间作为纵坐标,将同时满足横纵坐标数值的对应人脸图像放置到对应的格子中,形成15个姿态箱,即就是将备选人脸图像分为15类。Divide the yaw angle from -25° to 25° into 5 intervals as the abscissa, and divide the pitch angle from -15° to 15° into three intervals as the ordinate, which will satisfy the corresponding values of the abscissa and ordinate at the same time. The face images are placed in the corresponding grids to form 15 pose boxes, that is, the candidate face images are divided into 15 categories. 3.根据权利要求2所述的一种基于离线人脸数据库的自动面部替换方法,其特征在于,所述步骤3中的计算出待测人脸图像中所有人脸的欧拉角,按照步骤2.1的方法进行,计算偏航角Yaw和俯仰角Pitch。3. a kind of automatic face replacement method based on offline face database according to claim 2, is characterized in that, in described step 3, calculate the Euler angle of all faces in the face image to be measured, according to the step The method of 2.1 is carried out, and the yaw angle Yaw and the pitch angle Pitch are calculated. 4.根据权利要求3所述的一种基于离线人脸数据库的自动面部替换方法,其特征在于,所述步骤4具体为:4. a kind of automatic face replacement method based on offline face database according to claim 3, is characterized in that, described step 4 is specifically: 步骤4.1,对备选人脸图像进行性别筛选:根据待测人脸图像的性别,在备选人脸图像的15个姿态箱中选择与待测人脸图像性别相同的人脸图像进行保留,作为下一步候选人脸图像;Step 4.1, perform gender screening on the candidate face image: according to the gender of the face image to be tested, select a face image with the same gender as the face image to be tested from the 15 pose boxes of the candidate face image for retention, as the next candidate face image; 步骤4.2,将步骤4.1获得的所有候选人脸图像进行年龄筛选:根据待测人脸图像的年龄区间,在步骤4.1得到的候选人脸图像中检测符合对应年龄区间的人脸作为下一步的候选人脸图像;Step 4.2, perform age screening on all candidate face images obtained in step 4.1: According to the age range of the face image to be tested, detect faces that meet the corresponding age range in the candidate face images obtained in step 4.1 as candidates for the next step face image; 步骤4.3,选择步骤4.2得到的候选人脸图像中与待测人脸图像的偏航角和俯仰角相差均不超过3°的人脸作为下一步的候选人脸图像;Step 4.3, select the candidate face image in the candidate face image obtained in step 4.2 whose yaw angle and pitch angle differ by no more than 3° from the face image to be tested as the candidate face image for the next step; 步骤4.4,选择步骤4.3候选人脸集合中符合分辨率要求的人脸作为下一步的候选人脸图像;Step 4.4, select the face that meets the resolution requirements in the candidate face set in step 4.3 as the candidate face image for the next step; 步骤4.5,计算经过步骤4.4处理得到的候选人脸图像与待测人脸图像的模糊距离dB,将计算得到的模糊距离dB由小到大排序,保留排在前50%的图像作为下一步的候选人脸图像;Step 4.5, calculate the fuzzy distance d B between the candidate face image and the face image to be tested obtained after processing in step 4.4, sort the calculated fuzzy distance d B from small to large, and keep the top 50% of the images as the next step. 
one-step candidate face image; 步骤4.6,计算步骤4.5得到的候选人脸图像与待测人脸图像之间的照明距离dL,然后对dL按照从小到大的规则进行排序,最终保留照明距离dL在前10%的人脸图像作为下一步的候选人脸。Step 4.6, calculate the lighting distance d L between the candidate face image obtained in step 4.5 and the face image to be tested, and then sort d L according to the rules from small to large, and finally keep the lighting distance d L in the top 10%. The face image serves as the candidate face for the next step. 5.根据权利要求4所述的一种基于离线人脸数据库的自动面部替换方法,其特征在于,所述步骤4.5具体为:5. a kind of automatic face replacement method based on offline human face database according to claim 4, is characterized in that, described step 4.5 is specifically: 步骤4.5.1,将每个人脸图像的眼睛区域的灰度强度归一化为零均值和单位方差,具体为:Step 4.5.1, normalize the gray intensity of the eye region of each face image to zero mean and unit variance, specifically:
Figure FDA0002838937240000031
Figure FDA0002838937240000031
其中,x是步骤4.4得到的候选人脸图像或者待测人脸图像眼睛区域单个像素点内灰度强度值,
Figure FDA0002838937240000032
是步骤4.4得到的候选人脸图像或者待测人脸图像眼睛区域灰度强度的均值,σ是步骤4.4得到的候选人脸图像或者待测人脸图像区域灰度强度的标准差,x*是归一化之后的灰度强度值;
Among them, x is the candidate face image obtained in step 4.4 or the gray intensity value of a single pixel in the eye region of the face image to be tested,
Figure FDA0002838937240000032
is the mean value of the gray intensity in the eye region of the candidate face image or the face image to be tested obtained in step 4.4, σ is the standard deviation of the gray intensity of the candidate face image or the image region to be tested obtained in step 4.4, x * is Normalized grayscale intensity value;
步骤4.5.2,计算归一化眼区梯度大小的直方图h(1)和h(2),将直方图乘以一个加权函数,该加权函数使用直方图索引bin的平方,得到两个加权直方图
Figure FDA0002838937240000033
Figure FDA0002838937240000034
具体为:
Step 4.5.2, compute the histograms h (1) and h (2) of the normalized eye gradient magnitudes, multiply the histogram by a weighting function that uses the square of the histogram index bin to obtain two weights Histogram
Figure FDA0002838937240000033
and
Figure FDA0002838937240000034
Specifically:
Figure FDA0002838937240000035
Figure FDA0002838937240000035
其中,n表示直方图的bin索引号,h(i)表示候选人脸图像或者待测人脸图像分别的归一化眼区域梯度大小的直方图,
Figure FDA0002838937240000041
表示候选人脸图像或者待测人脸图像对应的加权直方图,i=1表示候选人脸图像,i=2表示待测人脸图像;
Among them, n represents the bin index number of the histogram, h (i) represents the histogram of the normalized eye region gradient size of the candidate face image or the face image to be tested, respectively,
Figure FDA0002838937240000041
represents the weighted histogram corresponding to the candidate face image or the face image to be tested, i=1 represents the candidate face image, and i=2 represents the face image to be tested;
步骤4.5.3,计算候选人脸图像和待测人脸图像的模糊距离,即就是直方图相交距离HID,具体为:Step 4.5.3, calculate the fuzzy distance between the candidate face image and the face image to be tested, that is, the histogram intersection distance HID, specifically:
Figure FDA0002838937240000042
Figure FDA0002838937240000042
其中,dB表示两张人脸图像的模糊距离;Among them, d B represents the blur distance of two face images; 步骤4.5.4,按照步骤4.5.1-4.5.3计算所有候选人脸图像和待测人脸图像的模糊距离dB,然后按照从小到大的规则排序,保留前50%的图像作为下一步的候选人脸图像。Step 4.5.4, follow steps 4.5.1-4.5.3 to calculate the blur distance d B of all candidate face images and the face image to be tested, then sort according to the rules from small to large, keep the first 50% of the images as the next step candidate face image.
6. The automatic face replacement method based on an offline face database according to claim 4, wherein step 4.6 is specifically:

Step 4.6.1: use a face re-labeling method to map the candidate face images obtained in step 4.5 and the face image to be tested onto a cylinder-like average face shape;

Step 4.6.2: compute the image intensity I_c(x, y) of the face replacement region in each RGB color channel, specifically:

I_c(x, y) = \rho_c \sum_{k=1}^{9} a_{c,k} H_k(n(x, y))    (5)

where I_c(x, y) denotes the image intensity of the face replacement region in color channel c; n(x, y) is the face surface normal at image position (x, y); \rho_c is the constant albedo of each of the three color channels; the coefficients a_{c,k} encode the illumination conditions; and H_k(n(x, y)) are the spherical harmonic images (the nine-term, second-order harmonic expansion is the standard choice; the original formula survives only as an image placeholder);

Step 4.6.3: create an orthonormal basis \psi_k(x, y) by applying Gram–Schmidt orthogonalization to the harmonic basis H_k(n(x, y)); I_c(x, y) can then be expressed as:

I_c(x, y) = \sum_{k=1}^{9} \beta_{c,k} \psi_k(x, y)    (6)

where \beta_{c,k} are the illumination coefficients and \psi_k(x, y) are the spherical harmonic images after Gram–Schmidt orthogonalization;

Step 4.6.4: following steps 4.6.1–4.6.3, compute I_c(x, y) for each candidate face image obtained in step 4.5 and for the face image to be tested, denoting the resulting illumination coefficients \beta^{(1)}_{c,k} and \beta^{(2)}_{c,k} respectively, then compute the illumination distance d_L between the candidate face image and the face image to be tested, specifically:

d_L(I^{(1)}, I^{(2)}) = \sum_c \sum_k \big( \beta^{(1)}_{c,k} - \beta^{(2)}_{c,k} \big)^2    (7)

where \beta^{(1)}_{c,k} are the illumination coefficients of the candidate face image, \beta^{(2)}_{c,k} are the illumination coefficients of the face image to be tested, and d_L(I^{(1)}, I^{(2)}) is the illumination distance between the candidate face image I^{(1)} and the face image to be tested I^{(2)} (the squared-difference form is a reconstruction from the surrounding definitions; the original formula survives only as an image placeholder);

Step 4.6.5: compute the illumination distance between every candidate face image and the face image to be tested, sort the computed distances d_L in ascending order, and keep the faces whose d_L lies in the smallest 10% as candidate face images for the next step.
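To make step 4.6 concrete (again outside the claims), here is a hedged sketch of the illumination fitting: build the nine second-order spherical-harmonic basis images from the per-pixel normals of the average face shape, orthonormalize them (a QR decomposition is numerically equivalent to Gram–Schmidt up to signs), fit the coefficients beta per channel by least squares, and compare coefficient vectors with d_L. The function names and the (P, 3) data layout are assumptions of this sketch:

```python
import numpy as np

def harmonic_basis(normals: np.ndarray) -> np.ndarray:
    """Nine second-order spherical-harmonic images H_k(n(x, y)).

    normals: (P, 3) unit normals for the P pixels of the replacement region.
    Constant factors of the harmonics are absorbed by the fitted coefficients.
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(nx),             # l = 0
        nx, ny, nz,                   # l = 1
        nx * ny, nx * nz, ny * nz,    # l = 2
        nx ** 2 - ny ** 2,
        3.0 * nz ** 2 - 1.0,
    ], axis=1)

def illumination_coeffs(intensities: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """Fit beta_{c,k}; intensities is (P, 3) RGB values of the region."""
    Q, _ = np.linalg.qr(harmonic_basis(normals))  # orthonormal basis psi_k
    return Q.T @ intensities                      # (9, 3) least-squares fit

def illumination_distance(beta1: np.ndarray, beta2: np.ndarray) -> float:
    """d_L as the summed squared difference of illumination coefficients."""
    return float(((beta1 - beta2) ** 2).sum())
```

Because Q has orthonormal columns, the least-squares solution reduces to Q.T @ intensities; step 4.6.5 then keeps the candidates whose d_L to the face under test falls in the smallest 10%.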
7. The automatic face replacement method based on an offline face database according to claim 4, wherein step 5 is specifically:

use the image formula to apply, within the replacement region, the illumination of the face image to be tested I^{(2)} to the candidate face image I^{(1)} obtained in step 4; the image intensity of the relit candidate face image is then:

\tilde{I}^{(1)}_c(x, y) = I^{(1)}_c(x, y) \cdot \frac{\sum_k \beta^{(2)}_{c,k} \psi_k(x, y)}{\sum_k \beta^{(1)}_{c,k} \psi_k(x, y)}    (8)

where I^{(1)}_c(x, y) is the image intensity of the face replacement region in each RGB color channel of the candidate face image I^{(1)}, computed according to formula (5); the denominator and numerator are the harmonic reconstructions of I^{(1)}_c(x, y) and of the corresponding intensity I^{(2)}_c(x, y) of the face image to be tested I^{(2)}, likewise computed according to formula (5); and \tilde{I}^{(1)}_c(x, y) is the image intensity of the candidate face image after the illumination of I^{(2)} has been applied to I^{(1)} (the ratio form is the standard harmonic relighting construction; the original formula survives only as an image placeholder).
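A matching sketch of the relighting in step 5, reusing harmonic_basis and illumination_coeffs from the sketch above (the epsilon guard against division by zero is an addition of this sketch, not part of the patent):

```python
import numpy as np

def relight(candidate: np.ndarray, normals: np.ndarray,
            beta_cand: np.ndarray, beta_probe: np.ndarray,
            eps: float = 1e-6) -> np.ndarray:
    """Apply the probe's illumination to the candidate inside the region.

    candidate: (P, 3) RGB intensities of the candidate's replacement region.
    beta_cand, beta_probe: (9, 3) coefficients from illumination_coeffs().
    """
    Q, _ = np.linalg.qr(harmonic_basis(normals))  # same orthonormal basis psi_k
    shading_cand = Q @ beta_cand                   # (P, 3) harmonic reconstruction
    shading_probe = Q @ beta_probe
    return candidate * shading_probe / (shading_cand + eps)
```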
8. The automatic face replacement method based on an offline face database according to claim 7, wherein step 6 is specifically: compute the Euclidean distance between each candidate face image processed in step 5 and the face image to be tested, sort the distances in ascending order, and select the top-ranked candidate face to replace the face to be tested in the output image.
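The final selection in claim 8 is a plain nearest-neighbour pick over the relit candidates; a one-function sketch, assuming the relit regions and the probe region are arrays of identical shape:

```python
import numpy as np

def best_candidate(relit_candidates, probe: np.ndarray) -> int:
    """Index of the candidate with the smallest Euclidean distance to the probe."""
    dists = [np.linalg.norm(c.ravel() - probe.ravel()) for c in relit_candidates]
    return int(np.argmin(dists))
```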
CN202011484330.4A 2020-12-16 2020-12-16 Automatic face replacement method based on off-line face database Active CN112634125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011484330.4A CN112634125B (en) 2020-12-16 2020-12-16 Automatic face replacement method based on off-line face database


Publications (2)

Publication Number Publication Date
CN112634125A true CN112634125A (en) 2021-04-09
CN112634125B CN112634125B (en) 2023-07-25

Family

ID=75313471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011484330.4A Active CN112634125B (en) 2020-12-16 2020-12-16 Automatic face replacement method based on off-line face database

Country Status (1)

Country Link
CN (1) CN112634125B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009094661A1 (en) * 2008-01-24 2009-07-30 The Trustees Of Columbia University In The City Of New York Methods, systems, and media for swapping faces in images
US20110123118A1 (en) * 2008-01-24 2011-05-26 Nayar Shree K Methods, systems, and media for swapping faces in images
CN102982165A (en) * 2012-12-10 2013-03-20 南京大学 Large-scale human face image searching method
TW201612796A (en) * 2014-09-22 2016-04-01 Univ Ming Chuan Utilizing two-dimensional image to estimate its three-dimensional face angle method, and its database establishment of face replacement and face image replacement method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BANGPENG YAO: "Person-specific face recognition in unconstrained environments: a combination of offline and online learning", IEEE *
CHENG Jun (成俊): "Design of a Face Replacement System Based on Deep Learning", Journal of Hubei Polytechnic University (湖北理工学院学报), vol. 36, no. 6, 10 December 2020 (2020-12-10) *
HUANG Cheng (黄诚): "Face Replacement in Images Based on the Candide-3 Algorithm", Computing Technology and Automation (计算技术与自动化), no. 02, 15 June 2018 (2018-06-15) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807313A (en) * 2021-10-08 2021-12-17 合肥安达创展科技股份有限公司 An AI platform analysis system based on Dlib face recognition
CN114998508A (en) * 2022-01-24 2022-09-02 上海幻维数码创意科技股份有限公司 A method of generating facial expressions in video based on Dlib and OpenGL
CN114661214A (en) * 2022-02-18 2022-06-24 北京达佳互联信息技术有限公司 Image display method, device and storage medium
CN114913275A (en) * 2022-04-18 2022-08-16 上海幻维数码创意科技股份有限公司 Face exchange method, device and storage medium based on Dlib and regression tree synthesis
CN116503842A (en) * 2023-05-04 2023-07-28 北京中科睿途科技有限公司 Facial pose recognition method and device for wearing mask for intelligent cabin
CN116503842B (en) * 2023-05-04 2023-10-13 北京中科睿途科技有限公司 Facial pose recognition method and device for wearing mask for intelligent cabin
CN116664393A (en) * 2023-07-05 2023-08-29 北京大学 Face data bleaching method, device, computing equipment and storage medium
CN116664393B (en) * 2023-07-05 2024-02-27 北京大学 Face data bleaching method, device, computing equipment and storage medium

Also Published As

Publication number Publication date
CN112634125B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112634125A (en) Automatic face replacement method based on off-line face database
US10762608B2 (en) Sky editing based on image composition
Van De Weijer et al. Learning color names for real-world applications
Soriano et al. Adaptive skin color modeling using the skin locus for selecting training pixels
Gupta et al. Texas 3D face recognition database
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
JP2023036784A (en) Virtual face makeup removal, fast face detection and landmark tracking
CN105868716B (en) A kind of face identification method based on facial geometric feature
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
US20180357819A1 (en) Method for generating a set of annotated images
WO2017029488A2 (en) Methods of generating personalized 3d head models or 3d body models
TW201005673A (en) Example-based two-dimensional to three-dimensional image conversion method, computer readable medium therefor, and system
CN110619628A (en) Human face image quality evaluation method
CN102622589A (en) Multispectral face detection method based on graphics processing unit (GPU)
KR20050022306A (en) Method and Apparatus for image-based photorealistic 3D face modeling
CN107273895B (en) Method for recognizing and translating real-time text of video stream of head-mounted intelligent device
CN111460976A (en) A data-driven real-time hand motion evaluation method based on RGB video
CN106874884A (en) Human body recognition methods again based on position segmentation
CN110019912A (en) Graphic searching based on shape
JP4414401B2 (en) Facial feature point detection method, apparatus, and program
CN110415816A (en) A multi-classification method for clinical images of skin diseases based on transfer learning
Smiatacz Normalization of face illumination using basic knowledge and information extracted from a single image
KR101357581B1 (en) A Method of Detecting Human Skin Region Utilizing Depth Information
Sharma et al. Image recognition system using geometric matching and contour detection
Li et al. Photo composition feedback and enhancement: Exploiting spatial design categories and the notan dark-light principle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250422
Address after: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province
Patentee after: Shenzhen Hongyue Information Technology Co.,Ltd.
Country or region after: China
Address before: 710048 Shaanxi province Xi'an Beilin District Jinhua Road No. 5
Patentee before: XI'AN University OF TECHNOLOGY
Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20251124
Address after: 510000 Guangdong Province Guangzhou City Haizhu District Pazhou Avenue No. 109 Room 2802 (Location: Self-compiled 2813)
Patentee after: Guangzhou Mengdong Information Technology Co.,Ltd.
Country or region after: China
Address before: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province
Patentee before: Shenzhen Hongyue Information Technology Co.,Ltd.
Country or region before: China