
CN112861671A - Method for identifying deeply forged face image and video - Google Patents

Method for identifying deeply forged face image and video

Info

Publication number
CN112861671A
CN112861671A (application CN202110110096.7A; granted as CN112861671B)
Authority
CN
China
Prior art keywords
face
forged
deep
mixed
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110110096.7A
Other languages
Chinese (zh)
Other versions
CN112861671B (en)
Inventor
李斌
周世杰
张家亮
贾宇
邹严
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Hongjieqi Technology Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110110096.7A priority Critical patent/CN112861671B/en
Publication of CN112861671A publication Critical patent/CN112861671A/en
Application granted granted Critical
Publication of CN112861671B publication Critical patent/CN112861671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for identifying deeply forged face images and videos, comprising the following steps: S1, collecting mixed training samples; S2, constructing an identification model comprising two 2D deep convolutional neural networks and one 3D deep convolutional neural network; S3, training the identification model with the mixed training samples; and S4, identifying the face video to be identified with the trained identification model. The invention proposes three improvements: (1) mixed training samples improve generalization performance; (2) the two 2D deep convolutional neural networks are trained on face-centered crops with large and small margins respectively, improving prediction robustness; (3) the 3D deep convolutional neural network exploits inter-frame consistency information, improving information utilization. The method thereby addresses the poor ability of the prior art to discriminate novel forged videos.

Description

Method for identifying deeply forged face image and video
Technical Field
The invention particularly relates to a method for identifying a deeply forged face image and a deeply forged video.
Background
Detection methods for forged videos fall into two categories. The first is based on temporal features across frames: it exploits time-dependent cues in the video, such as blinking frequency and mouth movement, and typically uses recurrent classification methods. The second is based on visual artifacts within a frame: it exploits flaws such as blending edges, unnatural placement of facial features, and inconsistent facial shadows, typically extracting specific features and then completing detection with a deep or shallow classifier.
In addition, researchers have proposed tracing deeply forged videos with traceable, tamper-proof blockchain techniques. In 2019, researchers in the Department of Electrical and Computer Engineering at Khalifa University in the United Arab Emirates published a paper titled "Combating Deepfake Videos Using Blockchain and Smart Contracts", proposing a blockchain-based solution and general framework for tracing the source and history of digital content, even after the content has been copied multiple times. The framework is generic and can be applied to any other form of digital content.
The specific achievement aspect is as follows:
In August 2017, the cyber-security group of the Institute for Infocomm Research in Singapore published a paper titled "Automated face swapping and its detection", proposing an AI face-swap detection framework for the first time, with a detection accuracy of 92%. Since then, research on AI face-swapping and its detection has accelerated, with companies, universities, and individual developers investing in the development of detection tools.
In 2019, researchers at the University of California, Berkeley and the University of Southern California collected personal characteristics from existing genuine videos and built a highly personalized "soft biometric" identification system. Once the system has learned a person's micro-expressions and behavioral habits, it can identify forgeries with up to 95% accuracy. Adobe also introduced a reverse-PS (Photoshop) tool in June 2019: using an AI algorithm, the tool automatically identifies the regions of a portrait photo modified by the image-liquify tool and restores them to their original appearance, with accuracy as high as 99%.
To help researchers develop automatic detection tools for deep forgeries, Google published a deepfake video recognition dataset in September 2019, including some 3,000 videos performed by real actors in 28 different scenes. Researchers worldwide can train deepfake detection tools on this fully open-source dataset.
However, the above techniques only examine single images: they do not consider contextual information in the video, so the neural network cannot automatically exploit inter-frame information or reason from inter-frame consistency. Moreover, the methods and variants of real-world deeply forged videos cannot be exhausted; forgery algorithms are continually improved and new ones continually proposed, so the characteristics and artifacts of real-world deep forgeries differ markedly from the forged datasets currently produced in the industry. Models trained on such forged datasets with ordinary classification convolutional neural network methods generalize poorly and discriminate novel forged videos poorly.
Disclosure of Invention
The invention aims to provide a method for identifying a deeply forged face image and a deeply forged video, so as to solve the problems in the prior art.
The invention provides a method for identifying a deeply forged face image and a deeply forged video, which comprises the following steps:
s1, collecting a mixed training sample;
s2, constructing an identification model; the identification model comprises two 2D deep convolution neural networks and a 3D deep convolution neural network;
s3, training the identification model by using the mixed training sample;
and S4, identifying the face video to be identified by using the trained identification model.
Further, the method for collecting the mixed training samples in step S1 includes:
S11, collecting a large number of deeply forged videos and their corresponding original videos to form a training data set;
S12, detecting the first face position in each frame of each deeply forged video by a face detection method, randomly extracting segments of length L from consecutive frames containing a forged face, and cropping the face region using the first face position information to form deeply forged face segments;
S13, detecting the second face position in each frame of the original video corresponding to each deeply forged video by a face detection method, randomly extracting segments of length L from consecutive frames containing a face, and cropping the face region using the second face position information to form original video face segments;
S14, taking a frame F from a deeply forged face segment and the corresponding frame R from the original video face segment, and forming a mixed face image as their weighted sum;
and S15, applying the method of step S14 to all deeply forged face segments and their corresponding original video face segments to form mixed face images, obtaining the mixed training samples.
Further, the weights used to sum frame F and the corresponding frame R in step S14 are random samples in [0,1] drawn from a certain distribution.
Further, in step S2, the convolution kernels of each 2D deep convolutional neural network are 2D, the backbone is an ordinary deep convolutional neural network, and the fully connected layer outputs two classes.
Further, the method for training the 2D deep convolutional neural networks in step S3 includes:
(1) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is far from the crop edge, then repeatedly training the first 2D deep convolutional neural network on the cropped image by forward and backward propagation;
(2) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is close to the crop edge, then repeatedly training the second 2D deep convolutional neural network on the cropped image by forward and backward propagation.
further, in step S2, the 3D deep convolutional neural network is based on a 2D deep convolutional neural network, and the convolution kernel thereof is replaced by a 3D convolution kernel, so that the 3D deep convolutional neural network has the capability of performing convolution between video frames.
Further, the method for training the 3D deep convolutional neural network in step S3 is to extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly train the 3D deep convolutional neural network on them by forward and backward propagation.
Further, step S4 comprises the following sub-steps:
S41, randomly extracting a video frame segment from the face video to be identified;
S42, identifying the face in each frame of the video frame segment using the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment using the trained 3D deep convolutional neural network;
and S44, combining the prediction values of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by a weighted ensemble to obtain the identification result.
Further, the weighted ensemble in step S44 uses each prediction value's confidence as its weight; the confidence is the distance between the prediction value and 0.5.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the invention proposes three improvements: (1) the generalization performance is improved by adopting a mixed training sample; (2) the face center cutting images of the large edge and the small edge are adopted to train two 2D depths, the depth convolution neural network of the prediction robustness is improved, and the prediction robustness is improved; (3) the 3D deep convolution neural network can utilize the interframe consistency information, so that the information utilization rate is improved; therefore, the method and the device can solve the problem that the discrimination capability of the prior art on the novel forged video is poor.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of a method for identifying a deeply forged face image and a video according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of collecting hybrid training samples according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of identifying a video of a face to be identified by using a trained identification model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1, the present embodiment provides a method for identifying a deeply forged face image and a deeply forged video, including the following steps:
s1, collecting a mixed training sample;
referring to fig. 2, the method for collecting the hybrid training samples in step S1 includes:
s11, collecting a large number of deep forged videos and original videos corresponding to the deep forged videos to form a training data set;
s12, detecting the first face position in each frame of each depth forged video by using a face detection method, randomly intercepting segments with the length of L in continuous frames with forged faces, and cutting out face frames by using first face position information to form depth forged face segments;
s13, detecting the position of a second face in each frame of the original video corresponding to each depth forged video by using a face detection method, randomly intercepting segments with the length of L in continuous frames with faces, and cutting face frames by using second face position information to form face segments of the original video;
s14, taking a frame F in the depth fake face fragment and a corresponding frame R in the original video face fragment, and weighting and adding the frame F and the corresponding frame R to form a mixed face image; in some embodiments, the frame F and the corresponding frame R are weighted and summed to a weighted sum that is a [0,1] random sample that fits a distribution, such as a normal distribution;
and S15, forming a mixed face image by the method of the step S14 on all the deeply forged face fragments and the corresponding original video face fragments, and obtaining a mixed training sample.
This step S1 can generate a novel hybrid training sample based on the original video and the depth-forged video using a data augmentation method, and can improve generalization performance.
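The blending of step S14 can be sketched as follows. This is a minimal sketch under assumed conventions: frames are represented as toy nested lists of pixel intensities, the weight is drawn uniformly from [0, 1] (the patent only requires a [0,1] random sample from some distribution), and the function name is hypothetical.

```python
import random

def blend_frames(fake_frame, real_frame, alpha=None):
    """Form a mixed face image as the weighted sum of a deepfake frame F
    and its aligned original frame R: alpha * F + (1 - alpha) * R.

    Frames are nested lists of pixel intensities; `alpha` defaults to a
    uniform random sample from [0, 1], one possible choice of the
    distribution the patent leaves open.
    """
    if alpha is None:
        alpha = random.random()
    mixed = [
        [alpha * f + (1.0 - alpha) * r for f, r in zip(f_row, r_row)]
        for f_row, r_row in zip(fake_frame, real_frame)
    ]
    return mixed, alpha
```

Applying this to every aligned frame pair of the forged and original face segments, as in step S15, yields the mixed training samples.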
S2, constructing an identification model; the identification model comprises two 2D deep convolution neural networks and a 3D deep convolution neural network;
for a 2D deep convolutional neural network, a convolution kernel of each 2D deep convolutional neural network in this embodiment is 2D, a backbone network is a common deep convolutional neural network, and a full connection layer is a 2-class structure. The 2D depth convolution neural network is used for identifying whether a single image is subjected to depth forgery or not.
For a 3D deep convolutional neural network, the 3D deep convolutional neural network of this embodiment is based on a 2D deep convolutional neural network, and its convolution kernel is replaced by a 3D convolution kernel, so that it has the capability of performing convolution between video frames. The 2D depth convolution neural network is used for identifying whether continuous frame images are subjected to depth forgery or not.
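The replacement of 2D kernels with 3D kernels can be illustrated with one common "inflation" scheme: replicating each 2D kernel along the temporal axis and rescaling, as popularized by I3D. The patent only states that the kernels are replaced, so this particular scheme is an assumption, and the function name is hypothetical.

```python
def inflate_kernel(kernel_2d, depth):
    """Inflate a k x k 2D convolution kernel into a depth x k x k 3D kernel.

    The 2D weights are replicated `depth` times along the temporal axis and
    divided by `depth`, so that on a static clip (identical frames) the 3D
    convolution reproduces the original 2D response.
    """
    return [[[w / depth for w in row] for row in kernel_2d]
            for _ in range(depth)]
```

Summing the inflated kernel over its temporal axis recovers the original 2D kernel, which is why the 2D network's behavior is preserved on static input.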
S3, training the identification model by using the mixed training sample;
for a 2D deep convolutional neural network, the method for training the 2D deep convolutional neural network in this embodiment is as follows:
(1) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is far from the crop edge, then repeatedly training the first 2D deep convolutional neural network on the cropped image by forward and backward propagation;
(2) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is close to the crop edge, then repeatedly training the second 2D deep convolutional neural network on the cropped image by forward and backward propagation.
Training the two 2D deep convolutional neural networks on face-centered crops with a large margin and a small margin respectively improves prediction robustness.
For the 3D deep convolutional neural network, the training method in this embodiment is to extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly train the 3D deep convolutional neural network on them by forward and backward propagation. When the 3D network identifies a frame of a video, it uses the preceding and following frames as references, so it can exploit inter-frame consistency and improve information utilization.
S4, identifying the face video to be identified by using the trained identification model;
As shown in fig. 3, step S4 comprises the following sub-steps:
S41, randomly extracting a video frame segment of length L from the face video to be identified;
S42, identifying the face in each frame of the video frame segment using the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment using the trained 3D deep convolutional neural network;
and S44, combining the prediction values of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by a weighted ensemble to obtain the identification result. The weighted ensemble uses each prediction value's confidence as its weight. Each network outputs a prediction value in (0, 1) for a segment of video: the closer to 1, the more likely the network considers the video to be forged; the closer to 0, the more likely it considers the video real. Values near 0 or 1 carry high confidence and values near 0.5 carry low confidence, so in this embodiment the confidence is the distance between the prediction value and 0.5.
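The confidence-weighted ensemble of step S44 can be sketched as follows. The fallback to 0.5 when every network sits exactly at 0.5 is an added assumption (the patent does not specify this degenerate case), and the function name is hypothetical.

```python
def ensemble(predictions):
    """Combine per-network forgery scores in (0, 1) into one decision score.

    Each score p is weighted by its confidence |p - 0.5|, so predictions
    near 0 or 1 dominate and predictions near 0.5 contribute little.
    """
    weights = [abs(p - 0.5) for p in predictions]
    total = sum(weights)
    if total == 0:  # every network is maximally uncertain (assumed fallback)
        return 0.5
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

For example, with a confident score of 0.9 and an uncertain score of 0.5, the uncertain network receives weight 0 and the combined score stays at 0.9.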
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for identifying a deeply forged face image and a deeply forged video is characterized by comprising the following steps:
s1, collecting a mixed training sample;
s2, constructing an identification model; the identification model comprises two 2D deep convolution neural networks and a 3D deep convolution neural network;
s3, training the identification model by using the mixed training sample;
and S4, identifying the face video to be identified by using the trained identification model.
2. The method for identifying deeply forged face images and videos as claimed in claim 1, wherein the method for collecting the mixed training samples in step S1 comprises:
S11, collecting a large number of deeply forged videos and their corresponding original videos to form a training data set;
S12, detecting the first face position in each frame of each deeply forged video by a face detection method, randomly extracting segments of length L from consecutive frames containing a forged face, and cropping the face region using the first face position information to form deeply forged face segments;
S13, detecting the second face position in each frame of the original video corresponding to each deeply forged video by a face detection method, randomly extracting segments of length L from consecutive frames containing a face, and cropping the face region using the second face position information to form original video face segments;
S14, taking a frame F from a deeply forged face segment and the corresponding frame R from the original video face segment, and forming a mixed face image as their weighted sum;
and S15, applying the method of step S14 to all deeply forged face segments and their corresponding original video face segments to form mixed face images, obtaining the mixed training samples.
3. The method for identifying deeply forged face images and videos as claimed in claim 2, wherein the weights used to sum frame F and the corresponding frame R in step S14 are random samples in [0,1] drawn from a certain distribution.
4. The method for identifying deeply forged face images and videos as claimed in claim 2, wherein in step S2, the convolution kernels of each 2D deep convolutional neural network are 2D, the backbone is an ordinary deep convolutional neural network, and the fully connected layer outputs two classes.
5. The method for identifying deeply forged face images and videos as claimed in claim 4, wherein the method for training the 2D deep convolutional neural networks in step S3 is as follows:
(1) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is far from the crop edge, then repeatedly training the first 2D deep convolutional neural network on the cropped image by forward and backward propagation;
(2) extracting a frame of mixed face image from the mixed training samples and center-cropping it so that the face is close to the crop edge, then repeatedly training the second 2D deep convolutional neural network on the cropped image by forward and backward propagation.
6. The method for authenticating deep forged face images and videos as claimed in claim 4, wherein the 3D deep convolutional neural network in step S2 is based on a 2D deep convolutional neural network, and replaces the convolution kernel thereof with a 3D convolution kernel, so that the 3D deep convolutional neural network has the capability of performing convolution between video frames.
7. The method for identifying deeply forged face images and videos as claimed in claim 6, wherein the method for training the 3D deep convolutional neural network in step S3 is to extract several consecutive frames of mixed face images from the mixed training samples and then repeatedly train the 3D deep convolutional neural network on them by forward and backward propagation.
8. The method for identifying deeply forged face images and videos as claimed in claim 1, wherein step S4 comprises the sub-steps of:
S41, randomly extracting a video frame segment from the face video to be identified;
S42, identifying the face in each frame of the video frame segment using the two trained 2D deep convolutional neural networks;
S43, identifying each frame of the video frame segment using the trained 3D deep convolutional neural network;
and S44, combining the prediction values of the two 2D deep convolutional neural networks and the 3D deep convolutional neural network by a weighted ensemble to obtain the identification result.
9. The method for identifying deeply forged face images and videos as claimed in claim 8, wherein the weight used in the weighted ensemble in step S44 is the confidence of the prediction value; the confidence is the distance between the prediction value and 0.5.
CN202110110096.7A 2021-01-27 2021-01-27 An identification method for deepfake face images and videos Active CN112861671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110110096.7A CN112861671B (en) 2021-01-27 2021-01-27 An identification method for deepfake face images and videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110110096.7A CN112861671B (en) 2021-01-27 2021-01-27 An identification method for deepfake face images and videos

Publications (2)

Publication Number Publication Date
CN112861671A true CN112861671A (en) 2021-05-28
CN112861671B CN112861671B (en) 2022-10-21

Family

ID=76009483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110110096.7A Active CN112861671B (en) 2021-01-27 2021-01-27 An identification method for deepfake face images and videos

Country Status (1)

Country Link
CN (1) CN112861671B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435292A (en) * 2021-06-22 2021-09-24 北京交通大学 AI counterfeit face detection method based on inherent feature mining
CN113627256A (en) * 2021-07-09 2021-11-09 武汉大学 Method and system for detecting counterfeit video based on blink synchronization and binocular movement detection
CN113723220A (en) * 2021-08-11 2021-11-30 电子科技大学 Deep counterfeiting traceability system based on big data federated learning architecture
CN114093013A (en) * 2022-01-19 2022-02-25 武汉大学 Reverse tracing method and system for deeply forged human faces
CN114494935A (en) * 2021-12-15 2022-05-13 北京百度网讯科技有限公司 Video information processing method and device, electronic equipment and medium

Citations (11)

Publication number Priority date Publication date Assignee Title
US20160140436A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face Detection Using Machine Learning
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
US20190251333A1 (en) * 2017-06-02 2019-08-15 Tencent Technology (Shenzhen) Company Limited Face detection training method and apparatus, and electronic device
CN111368764A (en) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 False video detection method based on computer vision and deep learning algorithm
CN111611873A (en) * 2020-04-28 2020-09-01 平安科技(深圳)有限公司 Face replacement detection method and device, electronic equipment and computer storage medium
CN111967427A (en) * 2020-08-28 2020-11-20 广东工业大学 Fake face video identification method, system and readable storage medium
CN112052759A (en) * 2020-08-25 2020-12-08 腾讯科技(深圳)有限公司 Living body detection method and device
CN112149608A (en) * 2020-10-09 2020-12-29 腾讯科技(深圳)有限公司 Image recognition method, device and storage medium
CN112163488A (en) * 2020-09-21 2021-01-01 中国科学院信息工程研究所 Video false face detection method and electronic device
US20210004930A1 (en) * 2019-07-01 2021-01-07 Digimarc Corporation Watermarking arrangements permitting vector graphics editing
CN112258388A (en) * 2020-11-02 2021-01-22 公安部第三研究所 Public security view desensitization test data generation method, system and storage medium

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
OSCAR DE LIMA et al.: "Deepfake Detection using Spatiotemporal Convolutional Networks", arXiv *
THANH THI NGUYEN et al.: "Deep Learning for Deepfakes Creation and Detection", arXiv *
LU XIN et al.: "Face Liveness Detection Based on Deep Learning", Journal of University of Science and Technology Liaoning *
ZHANG JIANPEI et al.: "Semi-supervised Learning Algorithm for Least Squares Support Vector Machines", Journal of Harbin Engineering University *
ZHANG JIALIANG et al.: "Research on Video and Image Content Recognition Technology Based on New Media", Communications Technology *
LI SHANLU: "Research on Live Face Detection Based on 3D Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *
LIANG RUIGANG et al.: "A Survey of Audio-Visual Deepfake Detection Technology", Journal of Cyber Security *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435292A (en) * 2021-06-22 2021-09-24 北京交通大学 AI counterfeit face detection method based on inherent feature mining
CN113435292B (en) * 2021-06-22 2023-09-19 北京交通大学 An AI fake face detection method based on inherent feature mining
CN113627256A (en) * 2021-07-09 2021-11-09 武汉大学 Method and system for detecting counterfeit video based on blink synchronization and binocular movement detection
CN113627256B (en) * 2021-07-09 2023-08-18 武汉大学 Forged video inspection method and system based on blink synchronization and binocular movement detection
CN113723220A (en) * 2021-08-11 2021-11-30 电子科技大学 Deep counterfeiting traceability system based on big data federated learning architecture
CN113723220B (en) * 2021-08-11 2023-08-25 电子科技大学 Deep counterfeiting traceability system based on big data federation learning architecture
CN114494935A (en) * 2021-12-15 2022-05-13 北京百度网讯科技有限公司 Video information processing method and device, electronic equipment and medium
CN114494935B (en) * 2021-12-15 2024-01-05 北京百度网讯科技有限公司 Video information processing method and device, electronic equipment and medium
CN114093013A (en) * 2022-01-19 2022-02-25 武汉大学 Reverse tracing method and system for deeply forged human faces
CN114093013B (en) * 2022-01-19 2022-04-01 武汉大学 A method and system for reverse traceability of deep forgery faces

Also Published As

Publication number Publication date
CN112861671B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN112861671A (en) Method for identifying deeply forged face image and video
CN114694220B (en) Double-flow face counterfeiting detection method based on Swin Transformer
Guo et al. Exposing deepfake face forgeries with guided residuals
Miao et al. Learning forgery region-aware and ID-independent features for face manipulation detection
CN113537027A (en) Face deep forgery detection method and system based on face division
CN111242837A (en) Face anonymity privacy protection method based on generative adversarial network
Huang et al. DeepFake MNIST+: A DeepFake facial animation dataset
CN118196865B (en) Generalizable deep fake image detection method and system based on noise perception
CN113361474A (en) Double-current network image counterfeiting detection method and system based on image block feature extraction
CN119920017B (en) Multi-class image forgery detection method, device, equipment and medium
CN114842524A (en) Face false distinguishing method based on irregular significant pixel cluster
CN117079354A (en) A deepfake detection classification and localization method based on noise inconsistency
CN119832552B (en) An artificial intelligence forged content detection method based on joint decision-making of multiple expert models
CN119625804B (en) A method and system for detecting deep fake facial images
CN112598043B (en) A Cooperative Saliency Detection Method Based on Weakly Supervised Learning
CN120318593A (en) A multimodal deepfake detection model for temporal forgery localization
Ding et al. DeepFake videos detection via spatiotemporal inconsistency learning and interactive fusion
CN117496601B (en) Face living body detection system and method based on fine classification and antibody domain generalization
CN120014717A (en) A method, device, equipment and medium for detecting fake faces
CN117556884A (en) Self-supervision contrast learning method and system based on data enhancement and feature enhancement
CN117351541A (en) An anomaly deepfake detection method based on anomaly simulation and noise suppression
CN117542124A (en) Face video deep forgery detection method and device
CN117711046A (en) Human face living body detection method based on time sequence shuffling and motion enhancement
CN110210561A (en) Training method, object detection method and device, the storage medium of neural network
CN114529446A (en) Face gender conversion model optimization method, device and conversion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20241011

Address after: No. 3, 19th Floor, Building 2, Morgan Center, 568 Jindong Road, Jinjiang District, Chengdu City, Sichuan Province 610000 (self-assigned number)

Patentee after: Sichuan Hongjieqi Technology Co.,Ltd.

Country or region after: China

Address before: 611731 No.4, Section 2, Jianshe North Road, Chenghua District, Chengdu City, Sichuan Province

Patentee before: University of Electronic Science and Technology of China

Country or region before: China