Identifying Synthetic Faces through GAN Inversion and Biometric Traits Analysis
Figure 1. Overview of the inversion-based detection.
Figure 2. GAN architecture and training scheme.
Figure 3. Examples of face images before (top row) and after (bottom row) reconstruction through the StyleGAN2 inversion process available at https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/projector.py (accessed on 30 December 2022).
Figure 4. Pipeline of the comparison and training/classification processes.
Figure 5. (a) Landmarks numbered from 1 to 68; (b) landmarks detected on a face. Five areas can be identified: face line (red landmarks [1–17]); eyebrows [18–27] and eyes [37–48], both in green (together referred to as the eye area); nose [28–36] in yellow; and mouth [49–68] in blue.
Figure 6. Summary of the face image data used in the experiments.
Figure 7. Histograms of different similarity metrics for images belonging to different datasets and their reconstructed counterparts. For each case, we report the density of the values of each similarity metric and its box plot (https://seaborn.pydata.org/generated/seaborn.boxplot.html, accessed on 30 December 2022) on top of the histogram, highlighting the interval between the first and third quartiles of each dataset (colored box), the median of the distribution (vertical line within the colored box), and the outliers (grey diamonds).
Figure 8. Comparison between FaceNet128 and FaceNet512 features using the MSE. FaceNet512 shows better results: the SG2 faces are well separated from the other datasets, whose distributions tend to overlap each other, whereas FaceNet128 shows an overlap between SG2 and LFW.
Figure 9. Histograms of the distributions of the landmarks of different areas of the face for the different datasets. The full set of landmarks shows the best discrimination capability.
Figure 10. Visualization of the FN512 and LM68 differential feature vectors through UMAP dimensionality reduction.
Figure 11. Landmarks detected on the input and reconstructed images from different datasets. In each case, the left image is the input with the detected landmarks marked in blue, and the central one is the corresponding reconstruction with the detected landmarks marked in red. On the right, the two sets of landmarks are reported on the same spatial grid so that their displacement can be visualized.
Figure 12. Comparison of the ROC curves obtained with the SVM model using FaceNet features.
Figure 13. Comparison of the ROC curves obtained with the SVM model using landmark-based features.
Figure 14. Examples of face images before (top row) and after (bottom row) reconstruction when different resizing factors are applied as post-processing.
Figure 15. Examples of face images before (top row) and after (bottom row) reconstruction when different JPEG quality factors are applied as post-processing.
Figure 16. Shared images before (top row) and after (bottom row) reconstruction.
Abstract
1. Introduction
- We explicitly exploit the underlying mechanisms of GAN generators to perform the detection, instead of applying a blind learning procedure;
- We demonstrate that generative approaches produce structural errors when reproducing previously unseen face images, which can be revealed through appropriate sets of features;
- The proposed technique can be extended to any generator that admits an inversion, thus limiting the need for retraining over large image datasets;
- We release a data corpus of face images and their reconstructions through the StyleGAN2 inversion, available for research purposes.
2. Related Work
2.1. Data-Driven Detection Methods
2.2. GAN Image Generation and Inversion
3. Inversion-Based Detection
3.1. Inversion Process
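The reconstructions are obtained with the official StyleGAN2-ADA projector linked in Figure 3, which optimizes a latent code until the generated image matches the target face. As a minimal sketch of how it can be invoked (assuming a local clone of the repository; the network pickle and image paths below are placeholders, not files shipped with this paper):

```python
# Minimal sketch: running the official StyleGAN2-ADA projector
# (projector.py from https://github.com/NVlabs/stylegan2-ada-pytorch).
import subprocess

subprocess.run([
    "python", "projector.py",
    "--network=ffhq.pkl",       # pre-trained generator (placeholder path)
    "--target=input_face.png",  # aligned input face to be inverted
    "--outdir=out",             # proj.png and projected_w.npz land here
    "--num-steps=1000",         # optimization iterations (repo default)
], check=True)
```

The projector writes the reconstructed image (proj.png) and the optimized latent code (projected_w.npz) to the output directory; the input/reconstruction pairs compared in the following are obtained this way.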
3.2. Feature Extraction and Classification Process
- FaceNet embeddings: proposed in [26], the FaceNet features are the best-performing ones on the LFW face recognition dataset [27] among the deep features available in the Deepface toolbox (https://github.com/serengil/deepface, accessed on 30 December 2022). We compute the embeddings of both the input image and its reconstruction, employing the original 512-dimensional FaceNet version and its compact 128-dimensional variant. In this case, the comparison is simply an element-wise absolute difference, and we denote the two resulting types of differential feature vectors as FN512 and FN128.
- Facial landmarks: proposed in [28] and available in the dlib library (https://github.com/davisking/dlib, accessed on 30 December 2022), the landmark localization algorithm returns 68 facial landmarks related to key facial structures, which can be further partitioned into different face areas (face line, eyebrows, eyes, nose, and mouth), as shown in Figure 5. For the input and the reconstructed image, this extractor outputs two arrays of size 68 × 2 containing, row-wise, the 2D coordinates of the 68 landmarks, from which we extract two types of differential feature vectors (see the sketch after this list):
  - LM68 contains the 68 Euclidean distances between corresponding landmarks (i.e., between the 2D coordinates of the same landmark in the two faces);
  - LM136 contains the absolute differences between the individual corresponding landmark coordinates, yielding 136 values.
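As a concrete illustration of both extractors, the following sketch computes the FN and LM differential vectors for an input/reconstruction pair. It assumes the deepface and dlib packages plus dlib's standard shape_predictor_68_face_landmarks.dat model; the helper names are ours, and the exact return format of DeepFace.represent may vary across library versions.

```python
import numpy as np
import dlib
from deepface import DeepFace

def facenet_diff(img_a, img_b, model="Facenet512"):
    """|f(I) - f(I_R)|: element-wise absolute difference of embeddings
    (use model="Facenet" for the 128-dimensional variant)."""
    ea = DeepFace.represent(img_path=img_a, model_name=model,
                            enforce_detection=False)[0]["embedding"]
    eb = DeepFace.represent(img_path=img_b, model_name=model,
                            enforce_detection=False)[0]["embedding"]
    return np.abs(np.array(ea) - np.array(eb))  # FN512 (or FN128)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(path):
    """68 x 2 array of 2D landmark coordinates; assumes one detectable face."""
    img = dlib.load_rgb_image(path)
    box = detector(img, 1)[0]
    shape = predictor(img, box)
    return np.array([[p.x, p.y] for p in shape.parts()])

def landmark_diff(img_a, img_b):
    la, lb = landmarks(img_a), landmarks(img_b)
    lm68 = np.linalg.norm(la - lb, axis=1)  # 68 Euclidean distances
    lm136 = np.abs(la - lb).reshape(-1)     # 136 coordinate differences
    return lm68, lm136
```

In this form, LM68 has 68 entries and LM136 has 136, matching the dimensionalities implied by the feature names.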
4. Experimental Setup and Analysis of Results
- CelebA: a subset of the CelebA dataset (https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 30 December 2022) [29], proposed for face detection and recognition, landmark localization, and face editing.
- CelebHQ: a subset of the CelebAMask-HQ dataset (https://mmlab.ie.cuhk.edu.hk/projects/CelebA/CelebAMask_HQ.html, accessed on 30 December 2022) [30], proposed for evaluating algorithms in face parsing, recognition, and generation.
- Caltech: a subset of the Caltech Faces dataset (https://data.caltech.edu/records/6rjah-hdv18, accessed on 30 December 2022), involving 27 subjects with different expressions and under different illumination conditions.
- LFW: a subset of the Labeled Faces in the Wild (LFW) dataset (http://vis-www.cs.umass.edu/lfw/, accessed on 30 December 2022) [27], a public benchmark for face verification.
4.1. Metrics-Based Analysis
- Mean Squared Error (MSE): computes the average pixel-wise squared distance between the two images.
- Structural Similarity Index (SSIM): a perception-based measure that takes into account the mean values and the variances of the two images.
- Learned Perceptual Image Patch Similarity (LPIPS) (https://github.com/richzhang/PerceptualSimilarity, accessed on 30 December 2022): proposed in [31] and used in [4] for the same purpose, it computes the similarity between the activations of two image patches for some pre-defined network; a low LPIPS score means that the image patches are perceptually similar. A sketch of computing all three measures follows this list.
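A minimal sketch of the three measures, assuming scikit-image (0.19 or newer for the channel_axis argument), torch, and the lpips package from the repository linked above; the helper names are ours, and inputs are HxWx3 uint8 RGB arrays.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

def mse(a, b):
    """Average pixel-wise squared distance between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def ssim(a, b):
    """Structural similarity for RGB images (channel_axis=2)."""
    return structural_similarity(a, b, channel_axis=2)

loss_fn = lpips.LPIPS(net="alex")  # pre-defined perceptual network

def lpips_score(a, b):
    """Lower means perceptually more similar; LPIPS expects NCHW in [-1, 1]."""
    ta = torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    tb = torch.from_numpy(b).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    return loss_fn(ta, tb).item()
```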
4.2. Classification Results
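A minimal sketch of the classification stage for one configuration reported in the results tables, assuming scikit-learn; the random arrays below are stand-ins for the stacked differential feature vectors and their labels produced by the extraction step above, not real data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 68))    # stand-in for LM68 differential vectors
y = rng.integers(0, 2, size=320)  # 1 = synthetic (SG2), 0 = real face

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# SVM shown here; RF, LR, and MLP are available in scikit-learn as well.
clf = SVC(probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, scores))
```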
4.3. Robustness Analysis
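The following subsections re-run the pipeline after resizing and JPEG re-compression of the input images. A minimal sketch of generating such post-processed variants with Pillow (the helper names are ours; the factor and quality ranges mirror those in the robustness tables below):

```python
from PIL import Image

def resize_by_factor(path, factor, out_path):
    """Rescale both dimensions by a given factor (e.g., 0.3 to 1.3)."""
    img = Image.open(path)
    w, h = img.size
    img.resize((round(w * factor), round(h * factor))).save(out_path)

def jpeg_compress(path, quality, out_path):
    """Re-save with a JPEG quality factor (e.g., 70 to 100)."""
    Image.open(path).convert("RGB").save(out_path, "JPEG", quality=quality)
```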
4.3.1. Resizing
4.3.2. JPEG Compression
4.3.3. Social Network Sharing
5. Conclusions
References
1. Lago, F.; Pasquini, C.; Böhme, R.; Dumont, H.; Goffaux, V.; Boato, G. More Real Than Real: A Study on Human Visual Perception of Synthetic Faces. IEEE Signal Process. Mag. 2021, 39, 109–116.
2. Nightingale, S.J.; Farid, H. AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proc. Natl. Acad. Sci. USA 2022, 119, e2120481119.
3. Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4401–4410.
4. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and Improving the Image Quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
5. Dang-Nguyen, D.T.; Boato, G.; De Natale, F.G. 3D-model-based video analysis for computer generated faces identification. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1752–1763.
6. Bonomi, M.; Pasquini, C.; Boato, G. Dynamic texture analysis for detecting fake faces in video sequences. J. Vis. Commun. Image Represent. 2021, 79, 103239.
7. Dang-Nguyen, D.; Boato, G.; De Natale, F. Identify computer generated characters by analysing facial expressions variation. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Tenerife, Spain, 2–5 December 2012; pp. 252–257.
8. Gragnaniello, D.; Cozzolino, D.; Marra, F.; Poggi, G.; Verdoliva, L. Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In Proceedings of the IEEE International Conference on Multimedia and Expo, Shenzhen, China, 5–9 July 2021; pp. 1–6.
9. Marra, F.; Saltori, C.; Boato, G.; Verdoliva, L. Incremental learning for the detection and classification of GAN-generated images. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Delft, The Netherlands, 9–12 December 2019; pp. 1–6.
10. Marra, F.; Gragnaniello, D.; Verdoliva, L.; Poggi, G. Do GANs leave artificial fingerprints? In Proceedings of the IEEE Conference on Multimedia Information Processing and Retrieval, San Jose, CA, USA, 28–30 March 2019; pp. 506–511.
11. Wang, S.Y.; Wang, O.; Zhang, R.; Owens, A.; Efros, A.A. CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8695–8704.
12. Xia, W.; Zhang, Y.; Yang, Y.; Xue, J.H.; Zhou, B.; Yang, M.H. GAN Inversion: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 1–17.
13. Nataraj, L.; Mohammed, T.M.; Manjunath, B.S.; Chandrasekaran, S.; Flenner, A.; Bappy, J.H.; Roy-Chowdhury, A.K. Detecting GAN generated Fake Images using Co-occurrence Matrices. Electron. Imaging 2019, 2019, 532-1.
14. Wang, R.; Juefei-Xu, F.; Ma, L.; Xie, X.; Huang, Y.; Wang, J.; Liu, Y. FakeSpotter: A Simple yet Robust Baseline for Spotting AI-Synthesized Fake Faces. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Yokohama, Japan, 7–15 January 2020.
15. Marcon, F.; Pasquini, C.; Boato, G. Detection of Manipulated Face Videos over Social Networks: A Large-Scale Study. J. Imaging 2021, 7, 193.
16. Dong, X.; Miao, Z.; Ma, L.; Shen, J.; Jin, Z.; Guo, Z.; Teoh, A.B.J. Reconstruct face from features based on genetic algorithm using GAN generator as a distribution constraint. Comput. Secur. 2023, 125, 103026.
17. Albright, M.; McCloskey, S. Source Generator Attribution via Inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 16–20 June 2019.
18. Scherhag, U.; Rathgeb, C.; Merkle, J.; Busch, C. Deep Face Representations for Differential Morphing Attack Detection. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3625–3639.
19. Autherith, S.; Pasquini, C. Detecting morphing attacks through face geometry. J. Imaging 2020, 6, 115.
20. Chen, B.; Tang, G.; Sun, L.; Mao, X.; Guo, S.; Zhang, H.; Wang, X. Detection of GAN-Synthesized Image Based on Discrete Wavelet Transform. Secur. Commun. Netw. 2021, 2021, 5511435.
21. Wang, J.; Tondi, B.; Barni, M. An Eyes-Based Siamese Neural Network for the Detection of GAN-Generated Face Images. Front. Signal Process. 2022, 45.
22. Agarwal, S.; Farid, H. Detecting deep-fake videos from aural and oral dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 981–989.
23. Schwarcz, S.; Chellappa, R. Finding facial forgery artifacts with parts-based detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 933–942.
24. Ju, Y.; Jia, S.; Ke, L.; Xue, H.; Nagano, K.; Lyu, S. Fusing Global and Local Features for Generalized AI-Synthesized Image Detection. arXiv 2022, arXiv:2203.13964.
25. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27.
26. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
27. Huang, G.B.; Ramesh, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments; Technical Report 07-49; University of Massachusetts: Amherst, MA, USA, 2007.
28. Kazemi, V.; Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874.
29. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
30. Lee, C.H.; Liu, Z.; Wu, L.; Luo, P. MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
31. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
32. Pasquini, C.; Amerini, I.; Boato, G. Media forensics on social media platforms: A survey. EURASIP J. Inf. Secur. 2021, 2021, 1–19.
33. Boato, G.; Pasquini, C.; Stefani, A.; Verde, S.; Miorandi, D. TrueFace: A dataset for the detection of synthetic face images from social networks. In Proceedings of the IEEE/IAPR International Joint Conference on Biometrics, Abu Dhabi, United Arab Emirates, 10–13 October 2022.
34. Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-Free Generative Adversarial Networks. In Proceedings of NeurIPS, Virtual, 13 December 2021.
35. Chan, E.R.; Lin, C.Z.; Chan, M.A.; Nagano, K.; Pan, B.; Mello, S.D.; Gallo, O.; Guibas, L.; Tremblay, J.; Khamis, S.; et al. Efficient Geometry-aware 3D Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–20 June 2022.
36. Bińkowski, M.; Donahue, J.; Dieleman, S.; Clark, A.; Elsen, E.; Casagrande, N.; Cobo, L.C. High Fidelity Speech Synthesis with Adversarial Networks. In Proceedings of the ICLR, Addis Ababa, Ethiopia, 26–30 April 2020.
37. Xu, J.; Sun, X.; Ren, X.; Lin, J.; Wei, B.; Li, W. DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text. arXiv 2018, arXiv:1802.01345.
38. Bhavsar, K.; Vakharia, V.; Chaudhari, R.; Vora, J.; Pimenov, D.Y.; Giasin, K. A Comparative Study to Predict Bearing Degradation Using Discrete Wavelet Transform (DWT), Tabular Generative Adversarial Networks (TGAN) and Machine Learning Models. Machines 2022, 10, 176.
Dataset | |
---|---|---
SG2 | |
FFHQ | |
CelebA | |
CelebHQ | |
Caltech | |
LFW | |
Dataset | SVM | RF | LR | MLP | FNN
---|---|---|---|---|---
FFHQ vs. SG2 | 80.63 | 76.88 | 70.63 | 70.63 | 79.38
CelebA vs. SG2 | 82.28 | 82.91 | 69.62 | 69.62 | 82.91
CelebHQ vs. SG2 | 88.05 | 86.16 | 77.99 | 77.99 | 86.79
LFW vs. SG2 | 76.88 | 74.38 | 71.25 | 73.13 | 75.63
Caltech vs. SG2 | 89.38 | 81.88 | 81.25 | 81.25 | 83.13
All vs. SG2 | 78.13 | 75.00 | 71.88 | 73.13 | 77.50

(a) FN128

Dataset | SVM | RF | LR | MLP | FNN
---|---|---|---|---|---
FFHQ vs. SG2 | 81.88 | 78.75 | 73.75 | 73.13 | 80.00
CelebA vs. SG2 | 88.61 | 84.81 | 83.54 | 81.01 | 88.61
CelebHQ vs. SG2 | 86.79 | 84.91 | 81.13 | 79.25 | 86.16
LFW vs. SG2 | 79.38 | 78.13 | 77.50 | 72.50 | 81.25
Caltech vs. SG2 | 85.63 | 83.75 | 85.00 | 86.25 | 86.25
All vs. SG2 | 78.13 | 78.75 | 76.88 | 76.25 | 78.75

(b) FN512

Dataset | SVM | RF | LR | MLP | FNN
---|---|---|---|---|---
FFHQ vs. SG2 | 87.50 | 84.37 | 83.12 | 86.87 | 82.50
CelebA vs. SG2 | 89.24 | 91.14 | 88.60 | 87.34 | 87.34
CelebHQ vs. SG2 | 89.30 | 88.67 | 86.79 | 83.01 | 83.02
LFW vs. SG2 | 95.59 | 95.59 | 89.93 | 94.96 | 94.33
Caltech vs. SG2 | 87.50 | 87.50 | 86.25 | 87.50 | 87.50
All vs. SG2 | 89.37 | 88.12 | 85.00 | 85.00 | 84.37

(c) LM68

Dataset | SVM | RF | LR | MLP | FNN
---|---|---|---|---|---
FFHQ vs. SG2 | 88.75 | 84.37 | 80.62 | 85.00 | 85.62
CelebA vs. SG2 | 89.87 | 92.40 | 87.34 | 84.17 | 89.87
CelebHQ vs. SG2 | 89.30 | 88.67 | 81.76 | 84.90 | 84.90
LFW vs. SG2 | 94.96 | 93.71 | 84.27 | 90.56 | 94.96
Caltech vs. SG2 | 90.62 | 90.00 | 86.87 | 90.00 | 87.50
All vs. SG2 | 88.75 | 87.50 | 82.50 | 83.12 | 85.00

(d) LM136
Dataset | FN128 | FN512 | LM68 | LM136
---|---|---|---|---
FFHQ vs. SG2 | 0.85 | 0.90 | 0.95 | 0.95
CelebA vs. SG2 | 0.91 | 0.93 | 0.96 | 0.96
CelebHQ vs. SG2 | 0.93 | 0.93 | 0.96 | 0.96
LFW vs. SG2 | 0.84 | 0.89 | 0.98 | 0.98
Caltech vs. SG2 | 0.94 | 0.95 | 0.96 | 0.96
All vs. SG2 | 0.86 | 0.92 | 0.96 | 0.95
Train/Test | 0.3 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 |
---|---|---|---|---|---|---|---|---|
0.3 | 84.35 | 82.50 | 83.75 | 84.37 | 84.38 | 81.87 | 81.88 | 81.25 |
0.7 | 85.00 | 85.00 | 84.37 | 88.13 | 85.00 | 82.50 | 83.12 | 79.36 |
0.8 | 83.75 | 85.00 | 85.00 | 86.25 | 81.88 | 82.50 | 81.25 | 81.25 |
0.9 | 83.12 | 80.62 | 81.88 | 86.25 | 80.00 | 78.75 | 78.75 | 78.13 |
1.0 | 83.12 | 83.75 | 80.00 | 82.50 | 80.63 | 80.00 | 78.13 | 77.50 |
1.1 | 83.12 | 82.50 | 82.50 | 83.75 | 80.62 | 81.25 | 78.75 | 80.00 |
1.2 | 84.61 | 85.25 | 83.33 | 87.18 | 84.62 | 80.77 | 83.97 | 82.69 |
1.3 | 82.70 | 80.77 | 80.12 | 85.26 | 82.05 | 83.97 | 83.33 | 80.77 |
Train/Test | 0.3 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 |
---|---|---|---|---|---|---|---|---|
0.3 | 86.88 | 83.13 | 78.75 | 82.50 | 84.38 | 81.88 | 80.00 | 83.75 |
0.7 | 85.00 | 80.62 | 78.12 | 81.25 | 81.88 | 80.63 | 80.63 | 80.63 |
0.8 | 85.00 | 78.12 | 76.25 | 80.63 | 81.25 | 78.75 | 78.75 | 80.00 |
0.9 | 83.75 | 81.25 | 80.00 | 80.00 | 82.50 | 80.00 | 78.75 | 80.63 |
1.0 | 85.63 | 81.25 | 81.25 | 78.13 | 81.88 | 80.00 | 80.00 | 77.50 |
1.1 | 83.75 | 80.63 | 80.00 | 82.50 | 83.13 | 83.13 | 80.00 | 80.63 |
1.2 | 81.41 | 78.21 | 76.92 | 78.85 | 80.79 | 77.56 | 80.00 | 80.63 |
1.3 | 80.13 | 75.64 | 81.41 | 78.85 | 83.33 | 80.77 | 78.85 | 80.75 |
Train/Test | 0.3 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 |
---|---|---|---|---|---|---|---|---|
0.3 | 88.13 | 86.87 | 83.75 | 85.62 | 81.76 | 85.00 | 83.33 | 85.25 |
0.7 | 86.87 | 86.25 | 84.37 | 90.62 | 85.62 | 85.00 | 91.66 | 90.38 |
0.8 | 90.65 | 88.12 | 86.25 | 91.25 | 87.50 | 86.25 | 91.02 | 87.82 |
0.9 | 90.00 | 87.50 | 87.50 | 90.62 | 86.25 | 86.25 | 89.10 | 90.38 |
1.0 | 90.00 | 88.12 | 86.25 | 90.62 | 87.50 | 85.32 | 90.25 | 88.21 |
1.1 | 88.75 | 88.75 | 85.62 | 88.75 | 85.00 | 87.50 | 89.10 | 87.17 |
1.2 | 90.00 | 90.62 | 89.37 | 91.25 | 88.12 | 90.00 | 92.94 | 91.66 |
1.3 | 90.00 | 88.75 | 86.88 | 90.65 | 86.88 | 90.00 | 92.31 | 90.38 |
Train/Test | 0.3 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 | 1.2 | 1.3 |
---|---|---|---|---|---|---|---|---|
0.3 | 85.62 | 86.87 | 83.75 | 86.87 | 86.25 | 85.00 | 87.18 | 86.53 |
0.7 | 88.75 | 89.37 | 85.00 | 90.62 | 88.12 | 88.12 | 91.02 | 91.02 |
0.8 | 91.25 | 90.62 | 83.75 | 92.50 | 90.00 | 90.62 | 90.38 | 89.10 |
0.9 | 91.25 | 87.75 | 85.62 | 90.00 | 86.87 | 88.75 | 89.10 | 89.74 |
1.0 | 89.37 | 86.87 | 85.00 | 90.00 | 88.75 | 89.37 | 90.38 | 89.10 |
1.1 | 90.62 | 88.75 | 85.00 | 89.37 | 87.50 | 88.12 | 90.38 | 88.46 |
1.2 | 92.50 | 91.25 | 86.25 | 90.62 | 88.75 | 90.62 | 91.66 | 89.10 |
1.3 | 93.75 | 90.62 | 88.12 | 91.25 | 88.12 | 93.75 | 94.87 | 92.30 |
Train/Test | NO COMP | 100 | 95 | 90 | 80 | 70 |
---|---|---|---|---|---|---|
NO COMP | 80.62 | 80.62 | 78.12 | 76.25 | 79.37 | 76.87 |
100 | 78.12 | 78.12 | 72.50 | 73.75 | 74.37 | 75.00 |
95 | 78.75 | 80.62 | 76.87 | 73.12 | 79.37 | 76.87 |
90 | 82.50 | 83.12 | 80.62 | 79.37 | 84.37 | 78.75 |
80 | 81.25 | 76.87 | 76.25 | 74.37 | 81.25 | 77.50 |
70 | 78.12 | 80.62 | 75.62 | 76.87 | 78.10 | 74.37 |
Train/Test | NO COMP | 100 | 95 | 90 | 80 | 70 |
---|---|---|---|---|---|---|
NO COMP | 81.87 | 80.62 | 78.75 | 78.12 | 80.00 | 81.25 |
100 | 79.37 | 77.50 | 76.87 | 75.62 | 76.87 | 77.50 |
95 | 78.12 | 79.37 | 77.50 | 76.87 | 76.25 | 76.87 |
90 | 81.87 | 82.50 | 81.25 | 78.75 | 80.62 | 81.87 |
80 | 83.12 | 82.50 | 76.87 | 77.50 | 78.75 | 81.87 |
70 | 85.00 | 85.62 | 83.75 | 83.12 | 84.37 | 83.12 |
Train/Test | NO COMP | 100 | 95 | 90 | 80 | 70 |
---|---|---|---|---|---|---|
NO COMP | 87.50 | 84.37 | 90.00 | 85.62 | 89.37 | 90.00 |
100 | 85.62 | 88.12 | 86.87 | 86.25 | 88.12 | 88.12 |
95 | 85.00 | 85.00 | 86.87 | 85.62 | 87.50 | 86.25 |
90 | 83.75 | 85.62 | 85.62 | 82.50 | 86.87 | 86.25 |
80 | 87.50 | 90.00 | 88.75 | 85.62 | 89.37 | 90.00 |
70 | 85.00 | 89.37 | 86.25 | 89.37 | 90.00 | 90.62 |
Train/Test | NO COMP | 100 | 95 | 90 | 80 | 70 |
---|---|---|---|---|---|---|
NO COMP | 88.75 | 88.12 | 88.75 | 85.00 | 90.62 | 88.75 |
100 | 84.37 | 88.75 | 88.75 | 86.87 | 87.50 | 88.75 |
95 | 86.25 | 86.87 | 90.62 | 88.75 | 88.75 | 86.87 |
90 | 86.87 | 88.12 | 90.62 | 86.87 | 89.37 | 88.12 |
80 | 87.50 | 91.25 | 90.62 | 86.87 | 89.37 | 91.25 |
70 | 87.50 | 90.62 | 90.00 | 89.37 | 90.00 | 90.62 |
Feature | | | Telegram | All
---|---|---|---|---
FN128 | 73 | 85 | 77 | 80
FN512 | 74 | 82 | 78 | 76
LM68 | 96 | 97 | 98 | 96
LM136 | 100 | 98 | 96 | 97
Pasquini, C.; Laiti, F.; Lobba, D.; Ambrosi, G.; Boato, G.; De Natale, F. Identifying Synthetic Faces through GAN Inversion and Biometric Traits Analysis. Appl. Sci. 2023, 13, 816. https://doi.org/10.3390/app13020816