AuCFSR: Authentication and Color Face Self-Recovery Using Novel 2D Hyperchaotic System and Deep Learning Models
<p>Lyapunov exponents behavior of (<b>a</b>) 2D-SLIM, (<b>b</b>) 2D-HCM, (<b>c</b>,<b>d</b>) proposed map.</p>
<p>Bifurcation behavior of proposed 2D-MSCM map.</p>
<p>Phase attractor of the proposed 2D-MSCM.</p>
<p>Flowchart of the proposed scheme for color face image authentication and self-recovery.</p>
<p>Proposed LSB-based fragile watermarking of RGB color face image.</p>
<p>Proposed process for recovery data embedding.</p>
<p>(<b>a</b>) Color “Face” image of size 1024 × 1024; the first row of (<b>b</b>–<b>d</b>) shows the <span class="html-italic">Rb</span>, <span class="html-italic">Gb</span> and <span class="html-italic">Bb</span> recovery data obtained from the MSBs of the <span class="html-italic">R</span>, <span class="html-italic">G</span> and <span class="html-italic">B</span> channels, respectively, while the second row shows their scrambled versions (<span class="html-italic">Rb*, Gb*</span> and <span class="html-italic">Bb*</span>) obtained by using Algorithm 2.</p>
<p>Scrambled R-channel of “Face” image with cropping rate of (<b>a</b>) 15.33%, (<b>b</b>) 34.44%, (<b>c</b>) 61.18%, (<b>d</b>) 65.86%, and its unscrambled forms (<b>e</b>–<b>h</b>), respectively.</p>
<p>Deep learning-based post-processing of the recovered image.</p>
<p>(<b>a</b>–<b>h</b>) Set of 1024 × 1024 test images and (<b>i</b>) a binary watermark image generated by 2D-MSCM.</p>
<p>(<b>a</b>–<b>h</b>) Test images following the embedding of the watermark and the recovery data, with the corresponding PSNR and SSIM values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Images attacked by cropping with the corresponding PSNR values. (<b>c</b>) Ground truth binary masks. (<b>d</b>) Detected tampered zones with the corresponding accuracy, recall and precision values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Copy-move attacked images with the corresponding PSNR values. (<b>c</b>) Ground truth binary masks. (<b>d</b>) Detected tampered zones with the corresponding accuracy, recall and precision values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Images attacked by face swapping with the corresponding PSNR values. (<b>c</b>) Ground truth binary masks. (<b>d</b>) Detected tampered zones with the corresponding accuracy, recall and precision values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Images attacked by face removing with the corresponding cropping proportions and PSNR values. (<b>c</b>) Images recovered by the proposed method before post-processing. (<b>d</b>,<b>e</b>) Recovered images after applying the CodeFormer and DeOldify models, respectively, with the corresponding MSE, PSNR and SSIM values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Attacked images by face with the corresponding PSNR values. (<b>c</b>) Ground truth binary masks. (<b>d</b>) Detected tampered zones with the corresponding accuracy, recall and precision values.</p>
<p>Set of color face images with resolution 512 × 512 selected from the dataset [<a href="#B46-sensors-23-08957" class="html-bibr">46</a>].</p>
<p>(<b>a</b>) Watermarked test images. (<b>b</b>) Images attacked by various attacks with different proportions and PSNR values. (<b>c</b>) Detected tampered zones. (<b>d</b>) Color face images recovered by our method with the corresponding PSNR values.</p>
<p>(<b>a</b>) Watermarked test images of size 512 × 512. (<b>b</b>) Watermarked images attacked by “Salt-and-pepper” and “Gaussian” noise with various densities. (<b>c</b>) Noised-cropped images with the corresponding PSNR values. (<b>d</b>) Detected tampered zones with the corresponding accuracy, and (<b>e</b>) the color face images recovered by our method and their PSNR values.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Images tampered by cropping attacks with the corresponding PSNR values. (<b>c</b>) Ground truth binary masks of the tampered areas. (<b>d</b>,<b>e</b>) Detected tampered zones with the corresponding accuracy when using the correct security key and an incorrect one, respectively.</p>
<p>(<b>a</b>) Watermarked test images of size 1024 × 1024. (<b>b</b>) Tampered images. (<b>c</b>,<b>d</b>) Recovered images with the corresponding PSNR and SSIM values when using the correct security key and incorrect ones, respectively.</p>
<p>Set of test images selected from the dataset [<a href="#B40-sensors-23-08957" class="html-bibr">40</a>] and used in the comparative analysis.</p>
Abstract
1. Introduction
- Introducing a new 2D hyperchaotic system called 2D-MSCM.
- Analyzing the chaotic behavior of 2D-MSCM and demonstrating its superiority over similar maps.
- Introducing a new approach for color face image authentication and self-recovery (AuCFSR) based on the 2D-MSCM chaotic system and deep learning models.
- AuCFSR embeds the watermark and recovery data into the 2 LSBs of the cover image, which ensures high visual quality of the AuCFSR output image.
- The suggested 2D-MSCM ensures a high security level for the proposed AuCFSR, as its security keys are sensitive to variations as small as ±10⁻¹⁵.
- AuCFSR is a pixel-based scheme, which guarantees high precision in the tampering detection process.
- The use of deep learning models in the post-processing stage improves the visual quality of the recovered color face image.
- To the best of our knowledge, AuCFSR is the first authentication and self-recovery scheme designed for color face images.
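The key-sensitivity claim above can be illustrated numerically. Since the 2D-MSCM equations are not reproduced in this section, the sketch below uses the classic logistic map as a stand-in chaotic system; the function name and parameters are illustrative, not the paper's.

```python
# Hedged sketch of chaotic key sensitivity: a perturbation of 1e-15 in the
# initial condition (the "security key") quickly produces a completely
# different orbit. The logistic map x -> r*x*(1-x) stands in for 2D-MSCM.

def max_divergence(x0, eps=1e-15, r=3.99, n=200):
    """Largest gap between two logistic-map orbits whose seeds differ by eps."""
    x, y = x0, x0 + eps
    dmax = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        dmax = max(dmax, abs(x - y))
    return dmax

print(max_divergence(0.3))  # a 1e-15 key change yields order-one divergence
```

With a positive Lyapunov exponent, the 1e-15 seed difference is amplified exponentially, so even this tiny key change decorrelates the orbits within a few dozen iterations.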
2. Related Work
- The proposed scheme is pixel-based, which can provide high accuracy in detecting the tampered areas. Therefore, an improvement in the tampering detection accuracy is expected by using the proposed scheme.
- The proposed method integrates the watermark and recovery data into 2 LSBs of each pixel. Therefore, our method ensures low degradation of the host image.
- Our scheme uses the pseudorandom property of the proposed 2D-MSCM to construct the watermark data and to embed the latter with the recovery data into the input image. In addition, the chaotic property of 2D-MSCM is exploited to overcome the problem of tampering coincidence. To the best of our knowledge, this is the first exploitation of chaotic systems in overcoming this problem.
- The proposed scheme involves a post-processing stage that relies on pre-trained deep learning models for improving the recovered image quality. Therefore, an improvement in the quality of the recovered image is expected via our scheme over the latest state-of-the-art schemes.
- The performance of our system is illustrated by its application to the tampering detection and self-recovery of color face images. To the best of our knowledge, this is the first attempt to address this specific problem in image authentication and self-recovery applications.
- The robustness of the provided scheme against brute force attacks and the sensitivity of the security keys are investigated to prove its high level of security.
3. Novel 2-D Discrete Hyperchaotic Map and Its Analysis
3.1. Lyapunov Exponents
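This kind of Lyapunov analysis is typically done with the Jacobian/QR method. As the 2D-MSCM equations are not listed in this outline, the sketch below uses the Hénon map (a = 1.4, b = 0.3) as a stand-in 2D discrete system with known exponents (≈ 0.42 and ≈ −1.62); all names here are illustrative.

```python
import numpy as np

# Lyapunov-spectrum estimation for a 2D discrete map via the standard
# Jacobian/QR method; the Henon map stands in for the proposed 2D-MSCM.

def henon_step(state, a=1.4, b=0.3):
    x, y = state
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(state, a=1.4, b=0.3):
    x, _ = state
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def lyapunov_spectrum(n=5000, transient=500):
    state = np.array([0.1, 0.1])
    q = np.eye(2)
    sums = np.zeros(2)
    for i in range(n + transient):
        j = henon_jacobian(state)
        state = henon_step(state)
        # QR re-orthonormalization keeps the tangent vectors well-conditioned.
        q, r = np.linalg.qr(j @ q)
        if i >= transient:
            sums += np.log(np.abs(np.diag(r)))
    return sums / n

le1, le2 = lyapunov_spectrum()
```

For a hyperchaotic 2D map such as the proposed 2D-MSCM, both estimated exponents would be positive; the Hénon stand-in yields one positive and one negative exponent.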
3.2. Bifurcation Behavior
3.3. Phase Attractors
4. Proposed Scheme for Color Face Image Authentication and Self-Recovery
4.1. 2D-MSCM-Based Color Image Fragile Watermarking
Algorithm 1. LSB-based color face image watermarking. | |
Inputs | W: Binary watermark of size generated by the proposed 2D chaotic map I: Input color image of size |
Output | WI: The watermarked color image |
//Splitting the input I image into three color image channels | |
1. | Get the red channel (R) of the input image (I) |
2. | Get the green channel (G) of the input image (I) |
3. | Get the blue channel (B) of the input image (I) |
4. | Set the first LSBs of B to W bits, which generates B* channel |
//Generating the watermarked color image (WI) | |
5. | |
6. | |
7. | |
8. | Return WI |
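Algorithm 1 can be sketched in a few lines of NumPy. The function name is illustrative, and steps 5–7 (reassembling WI from R, G and B*) collapse into channel indexing:

```python
import numpy as np

# Sketch of Algorithm 1: the binary watermark W replaces the first LSB of the
# blue channel; the R and G channels are left untouched.

def embed_watermark(image, watermark):
    """image: (H, W, 3) uint8 RGB array; watermark: (H, W) array of 0/1 bits."""
    wi = image.copy()
    blue = wi[:, :, 2]
    # Clear the least significant bit, then set it to the watermark bit.
    wi[:, :, 2] = (blue & 0xFE) | watermark.astype(np.uint8)
    return wi

# Tiny demo: on an all-zero image, the blue LSBs become the watermark itself.
img = np.zeros((2, 2, 3), dtype=np.uint8)
w = np.array([[1, 0], [0, 1]], dtype=np.uint8)
wi = embed_watermark(img, w)
```

Extraction is simply `wi[:, :, 2] & 1`, which recovers W exactly; per-pixel distortion is at most one grey level, which is why the embedding is nearly imperceptible.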
4.2. Self-Recovery Data Embedding
Algorithm 2. Proposed 2D-MSCM based confusion Algorithm. | |
Inputs: | Xc: Confusion vector of size generated based on 2D-MSCM Yc: Confusion vector of size generated based on 2D-MSCM I: 2D matrix of size |
Output: | CI: The confused version of I matrix |
1. | for i = 1 to N do |
2. | Get value, which is the i-element in Xc vector |
3. | = CircShift (, )//where CircShift (, ) is a left circular shifting operation that shifts the elements of the i-th row of the I matrix by the given number of positions. |
4. | end for |
5. | for j = 1 to M do |
6. | Get value, which is the j-element in Yc vector |
7. | = CircShift (, )//where CircShift (, ) is a left circular shifting operation that shifts the elements of the j-th column of the matrix by the given number of positions. |
8. | end for |
9. | for i = 1 to N do |
10. | Get value, which is the i-element in Xc vector |
11. | = CircShift (, ) |
12. | end for |
13. | Return CI |
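Algorithm 2 and its inverse (Algorithm 6) are three passes of chaotic circular shifts (rows, columns, rows) and the same passes undone in reverse order. A NumPy sketch, with plain integer vectors standing in for the 2D-MSCM-generated Xc and Yc:

```python
import numpy as np

# Sketch of Algorithm 2 (confusion) and Algorithm 6 (inverse confusion).
# In the paper, Xc and Yc are derived from 2D-MSCM orbits; here they are
# arbitrary integer vectors so the sketch stays self-contained.

def confuse(mat, xc, yc):
    ci = mat.copy()
    for i in range(ci.shape[0]):          # left-shift each row by xc[i]
        ci[i, :] = np.roll(ci[i, :], -int(xc[i]))
    for j in range(ci.shape[1]):          # left-shift each column by yc[j]
        ci[:, j] = np.roll(ci[:, j], -int(yc[j]))
    for i in range(ci.shape[0]):          # second row pass, as in Algorithm 2
        ci[i, :] = np.roll(ci[i, :], -int(xc[i]))
    return ci

def inv_confuse(ci, xc, yc):
    mat = ci.copy()
    for i in range(mat.shape[0]):         # undo the second row pass
        mat[i, :] = np.roll(mat[i, :], int(xc[i]))
    for j in range(mat.shape[1]):         # undo the column pass
        mat[:, j] = np.roll(mat[:, j], int(yc[j]))
    for i in range(mat.shape[0]):         # undo the first row pass
        mat[i, :] = np.roll(mat[i, :], int(xc[i]))
    return mat
```

Because every pass is a permutation of pixel positions, the confusion changes only the arrangement (never the values), and applying the inverse passes in reverse order restores the original matrix exactly.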
Algorithm 3. Proposed Algorithm for the recovery data embedding into the LSBs of the watermarked image channels. | |
Inputs: | Rb*, Gb*, and Bb*: three binary matrices of size , and , respectively WI: Watermarked color image of size |
Output: | WI*: Watermarked color image with embedded self-recovery data |
//Splitting WI image into three color image channels | |
1. | Get the red channel (RW) of WI image |
2. | Get the green channel (GW) of WI image |
3. | Get the blue channel (BW) of WI image |
4. | Get the first layer, noted R1, of the Rb* matrix |
5. | Get the second layer, noted R2, of the Rb* matrix |
6. | Get the first layer, noted G1, of the Gb* matrix |
7. | Get the second layer, noted G2, of the Gb* matrix |
8. | Set the 1-LSBs of the RW pixels to R1 bits |
9. | Set the 2-LSBs of the RW pixels to R2 bits, which generates RW* channel |
10. | Set the 1-LSBs of the GW pixels to G1 bits |
11. | Set the 2-LSBs of the GW pixels to G2 bits, which generates GW* channel |
12. | Set the 2-LSBs of the BW pixels to Bb* bits, which generates BW* channel |
//Generating the watermarked color image with the self-recovery data (WI*) | |
13. | |
14. | |
15. | |
16. | Return WI* |
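Algorithm 3 reduces to bit masking per channel. The sketch below assumes the "1-LSB" is bit 0 and the "2-LSB" is bit 1, and leaves the blue channel's bit 0 alone because it already holds the watermark from Algorithm 1; plane names follow the algorithm, and the data here is synthetic.

```python
import numpy as np

# Sketch of Algorithm 3: the scrambled recovery bit-planes (R1, R2, G1, G2,
# Bb*) are written into the two LSBs of R and G and the 2nd LSB of B.

def embed_recovery(wi, r1, r2, g1, g2, bb):
    """wi: (H, W, 3) uint8 watermarked image; the rest are 0/1 bit-planes."""
    out = wi.copy()
    # R and G channels: clear bits 0-1, then write the two recovery planes.
    out[:, :, 0] = (out[:, :, 0] & 0xFC) | (r2 << 1) | r1
    out[:, :, 1] = (out[:, :, 1] & 0xFC) | (g2 << 1) | g1
    # B channel: only bit 1 is overwritten; bit 0 keeps the fragile watermark.
    out[:, :, 2] = (out[:, :, 2] & 0xFD) | (bb << 1)
    return out
```

Five bit-planes are thus hidden per pixel while the six most significant bits of every channel stay untouched, which bounds the per-pixel distortion to 3 grey levels.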
4.3. Blind Detection of the Tampered Areas
Algorithm 4. Proposed algorithm for localizing the tampered areas within the received image. | |
Inputs: | W: Binary watermark of size generated via the proposed 2D-MSCM. RI: Received color image of size |
Outputs: | TZ: Binary matrix that represents the tampered regions (pixel positions) within the RI image TZ_NOT: The logical NOT of TZ matrix |
//Splitting the RI image into three color image channels | |
1. | Get the red channel (R) of RI image |
2. | Get the green channel (G) of RI image |
3. | Get the blue channel (B) of RI image |
4. | Get the first LSB values of B channel and then save these values in WI* matrix. |
5. | T = XOR(WI*,W)//where the symbol “XOR” denotes the Bit-wise XOR operation between two binary inputs |
6. | TZ = IMCLOSE(T,3)//where the IMCLOSE(T,3) function [41] performs morphological closing with a radius of 3 pixels on the T binary image. |
7. | TZ_NOT = NOT(TZ)//where NOT (TZ) operator performs the logical NOT of TZ logical input |
8. | Return TZ, TZ_NOT |
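The core of Algorithm 4 is a bitwise XOR followed by morphological closing. In this sketch the radius-3 IMCLOSE of [41] is approximated by a single dilation-then-erosion pass with a 3×3 structuring element, which is enough to close pin-holes in the detected mask; function names are illustrative.

```python
import numpy as np

# Sketch of Algorithm 4: the watermark extracted from the 1st LSB of the blue
# channel is XOR-ed with the reference watermark W; mismatched bits mark
# tampered pixels, and a morphological closing cleans the binary mask.

def _dilate(mask):
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= padded[1 + di:1 + di + mask.shape[0],
                          1 + dj:1 + dj + mask.shape[1]]
    return out

def _erode(mask):
    return 1 - _dilate(1 - mask)

def locate_tampering(received, w):
    extracted = received[:, :, 2] & 1      # 1st LSB of the blue channel
    t = extracted ^ w                      # mismatches = tampered pixels
    tz = _erode(_dilate(t))                # morphological closing of T
    return tz, 1 - tz                      # TZ and its logical NOT
```

Detection is blind: only the received image and the key-generated watermark W are needed, and any pixel whose blue LSB no longer matches W is flagged.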
4.4. Cropping the Detected Tampered Zones Within the Received Image
4.5. Self-Recovery Image Generation
Algorithm 5. Proposed algorithm for generating the self-recovery image. | |
Inputs: | RI_Crop: Received image of size with cropped tampered zones TZ: The logical matrix of the tampered zones Xc: Confusion vector of size Yc: Confusion vector of size |
Output: | SRI: Generated self-recovery image of size |
1. | Get the red channel (R) of RI_Crop image |
2. | Get the green channel (G) of RI_Crop image |
3. | Get the blue channel (B) of RI_Crop image |
4. | Get the first LSB values of R channel and then save these values in R_c1 matrix |
5. | Get the second LSB values of R channel and then save these values in R_c2 matrix |
6. | Get the first LSB values of G channel and then save these values in G_c1 matrix |
7. | Get the second LSB values of G channel and then save these values in G_c2 matrix |
8. | Get the second LSB values of B channel and then save these values in B_c2 matrix. |
9. | Perform the inverse confusion process for R_c1 matrix using Inv_confusion function described in Algorithm 6 to get R1 matrix |
10. | Perform the inverse confusion process for R_c2 matrix using Inv_confusion function to get R2 matrix |
11. | Perform the inverse confusion process for G_c1 matrix using Inv_confusion function to get G1 matrix |
12. | Perform the inverse confusion process for G_c2 matrix using Inv_confusion function to get G2 matrix |
13. | Perform the inverse confusion process for B_c2 matrix using Inv_confusion function to get B2 matrix |
//The following steps are implemented for generating the self-recovery image | |
14. | SRI_R1 = R1.*TZ//where the symbol “.*” represents the Hadamard product operation |
15. | SRI_R2 = R2.*TZ |
16. | SRI_G1 = G1.*TZ |
17. | SRI_G2 = G2.*TZ |
18. | SRI_B2 = B2.*TZ |
19. | Create an matrix, noted SRI, of zero values represented on 8 bits |
20. | Set the 7-MSBs of the first SRI layer to SRI_R1 binary matrix |
21. | Set the 8-MSBs of the first SRI layer to SRI_R2 binary matrix |
22. | Set the 7-MSBs of the second SRI layer to SRI_G1 binary matrix |
23. | Set the 8-MSBs of the second SRI layer to SRI_G2 binary matrix |
24. | Set the 8-MSBs of the third SRI layer to SRI_B2 binary matrix |
25. | Return SRI |
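Steps 14–24 of Algorithm 5 amount to masking the recovered bit-planes with TZ and promoting them to the top of each 8-bit channel. The bit ordering below is an assumption (plane 1 at value 128, plane 2 at value 64), chosen because the recovery data originally came from the channel MSBs; the function name is illustrative.

```python
import numpy as np

# Sketch of the per-channel reconstruction in Algorithm 5: only tampered
# pixels (TZ == 1) receive recovery data, and the two recovered bit-planes
# are placed back at the two most significant bit positions.

def build_recovery_channel(p1, p2, tz):
    """p1, p2: 0/1 bit-planes; tz: 0/1 tamper mask (Hadamard product in Alg. 5)."""
    plane1 = p1 * tz                      # keep bits only where tampered
    plane2 = p2 * tz
    return ((plane1 << 7) | (plane2 << 6)).astype(np.uint8)
```

Only the two most significant bits per channel are recovered, so rebuilt pixels approximate the originals to within 64 grey levels; the deep-learning post-processing of Section 4.6 then restores the fine detail.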
Algorithm 6. Proposed inverse confusion algorithm (Inv_Confusion) based on 2D-MSCM. | |
Inputs: | Xc: Confusion vector of size generated based on 2D-MSCM Yc: Confusion vector of size generated based on 2D-MSCM CI:Confused 2D matrix of size |
Output: | I: The inverse confused version of CI matrix |
1. | for i = 1 to N do |
2. | Get value, which is the i-element in Xc vector |
3. | = CircShift (, ) |
4. | end for |
5. | for j = 1 to M do |
6. | Get value, which is the j-element in Yc vector |
7. | = CircShift (,) |
8. | end for |
9. | for i = 1 to N do |
10. | Get value, which is the i-element in Xc vector |
11. | = CircShift (,) |
12. | end for |
13. | Return I |
4.6. Deep Learning-Based Post-Processing of the Recovered Image
5. Simulation Results
5.1. Evaluation of Imperceptibility Performance
5.2. Evaluation of Tampering Detection Rate Performance
5.3. Evaluation of Image Recovery Performance
5.4. Evaluation of Security Performance
5.4.1. Key Space Analysis
5.4.2. Key Sensitivity Analysis
5.5. Comparison with Similar Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ray, A.; Roy, S. Recent Trends in Image Watermarking Techniques for Copyright Protection: A Survey. Int. J. Multimed. Inf. Retr. 2020, 9, 249–270. [Google Scholar] [CrossRef]
- Tolosana, R.; Rathgeb, C.; Vera-Rodriguez, R.; Busch, C.; Verdoliva, L.; Lyu, S.; Nguyen, H.H.; Yamagishi, J.; Echizen, I.; Rot, P.; et al. Future Trends in Digital Face Manipulation and Detection. In Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks; Advances in Computer Vision and Pattern Recognition; Rathgeb, C., Tolosana, R., Vera-Rodriguez, R., Busch, C., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 463–482. ISBN 978-3-030-87664-7. [Google Scholar]
- Aminuddin, A.; Ernawan, F. AuSR1: Authentication and Self-Recovery Using a New Image Inpainting Technique with LSB Shifting in Fragile Image Watermarking. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 5822–5840. [Google Scholar] [CrossRef]
- Aminuddin, A.; Ernawan, F. AuSR2: Image Watermarking Technique for Authentication and Self-Recovery with Image Texture Preservation. Comput. Electr. Eng. 2022, 102, 108207. [Google Scholar] [CrossRef]
- Liu, T.; Yuan, X. A Dual-Tamper-Detection Method for Digital Image Authentication and Content Self-Recovery. Multimed. Tools Appl. 2021, 80, 29805–29826. [Google Scholar] [CrossRef]
- Molina, J.; Ponomaryov, V.; Reyes, R.; Sadovnychiy, S.; Cruz, C. Watermarking Framework for Authentication and Self-Recovery of Tampered Colour Images. IEEE Lat. Am. Trans. 2020, 18, 631–638. [Google Scholar] [CrossRef]
- Faheem, Z.B.; Ali, M.; Raza, M.A.; Arslan, F.; Ali, J.; Masud, M.; Shorfuzzaman, M. Image Watermarking Scheme Using LSB and Image Gradient. Appl. Sci. 2022, 12, 4202. [Google Scholar] [CrossRef]
- Molina-Garcia, J.; Garcia-Salgado, B.P.; Ponomaryov, V.; Reyes-Reyes, R.; Sadovnychiy, S.; Cruz-Ramos, C. An Effective Fragile Watermarking Scheme for Color Image Tampering Detection and Self-Recovery. Signal Process. Image Commun. 2020, 81, 115725. [Google Scholar] [CrossRef]
- Singh, D.; Shivani, S.; Agarwal, S. Self-Embedding Pixel Wise Fragile Watermarking Scheme for Image Authentication. In Proceedings of the Intelligent Interactive Technologies and Multimedia; Agrawal, A., Tripathi, R.C., Do, E.Y.-L., Tiwari, M.D., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 111–122. [Google Scholar]
- Kamili, A.; Hurrah, N.N.; Parah, S.A.; Bhat, G.M.; Muhammad, K. DWFCAT: Dual Watermarking Framework for Industrial Image Authentication and Tamper Localization. IEEE Trans. Ind. Inform. 2021, 17, 5108–5117. [Google Scholar] [CrossRef]
- Lee, T.-Y.; Lin, S.D. Dual Watermark for Image Tamper Detection and Recovery. Pattern Recognit. 2008, 41, 3497–3506. [Google Scholar] [CrossRef]
- Wang, N.; Zhang, Y.; Zhang, L. Dynamic Selection Network for Image Inpainting. IEEE Trans. Image Process. 2021, 30, 1784–1798. [Google Scholar] [CrossRef] [PubMed]
- Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S.; Akbari, Y. Image Inpainting: A Review. Neural Process Lett. 2020, 51, 2007–2028. [Google Scholar] [CrossRef]
- Qin, Z.; Zeng, Q.; Zong, Y.; Xu, F. Image Inpainting Based on Deep Learning: A Review. Displays 2021, 69, 102028. [Google Scholar] [CrossRef]
- Wei, Z.; Min, W.; Wang, Q.; Liu, Q.; Zhao, H. ECNFP: Edge-Constrained Network Using a Feature Pyramid for Image Inpainting. Expert. Syst. Appl. 2022, 207, 118070. [Google Scholar] [CrossRef]
- Zhou, S.; Chan, K.; Li, C.; Loy, C.C. Towards Robust Blind Face Restoration with Codebook Lookup Transformer. Adv. Neural Inf. Process. Syst. 2022, 35, 30599–30611. [Google Scholar]
- Jantic, J. Deoldify. GitHub: Github.com/jantic/DeOldify. 2019. Available online: https://github.com/jantic/DeOldify (accessed on 16 September 2023).
- Tong, X.; Liu, Y.; Zhang, M.; Chen, Y. A Novel Chaos-Based Fragile Watermarking for Image Tampering Detection and Self-Recovery. Signal Process. Image Commun. 2013, 28, 301–308. [Google Scholar] [CrossRef]
- Singh, D.; Singh, S.K. Effective Self-Embedding Watermarking Scheme for Image Tampered Detection and Localization with Recovery Capability. J. Vis. Commun. Image Represent. 2016, 38, 775–789. [Google Scholar] [CrossRef]
- Cao, F.; An, B.; Wang, J.; Ye, D.; Wang, H. Hierarchical Recovery for Tampered Images Based on Watermark Self-Embedding. Displays 2017, 46, 52–60. [Google Scholar] [CrossRef]
- Tai, W.-L.; Liao, Z.-J. Image Self-Recovery with Watermark Self-Embedding. Signal Process. Image Commun. 2018, 65, 11–25. [Google Scholar] [CrossRef]
- Fan, M.; Wang, H. An Enhanced Fragile Watermarking Scheme to Digital Image Protection and Self-Recovery. Signal Process. Image Commun. 2018, 66, 19–29. [Google Scholar] [CrossRef]
- Bolourian Haghighi, B.; Taherinia, A.H.; Harati, A. TRLH: Fragile and Blind Dual Watermarking for Image Tamper Detection and Self-Recovery Based on Lifting Wavelet Transform and Halftoning Technique. J. Vis. Commun. Image Represent. 2018, 50, 49–64. [Google Scholar] [CrossRef]
- Li, Y.; Song, W.; Zhao, X.; Wang, J.; Zhao, L. A Novel Image Tamper Detection and Self-Recovery Algorithm Based on Watermarking and Chaotic System. Mathematics 2019, 7, 955. [Google Scholar] [CrossRef]
- Al-Otum, H.M.; Ellubani, A.A.A. Secure and Effective Color Image Tampering Detection and Self Restoration Using a Dual Watermarking Approach. Optik 2022, 262, 169280. [Google Scholar] [CrossRef]
- Su, G.-D.; Chang, C.-C.; Lin, C.-C. Effective Self-Recovery and Tampering Localization Fragile Watermarking for Medical Images. IEEE Access 2020, 8, 160840–160857. [Google Scholar] [CrossRef]
- Rezaei, M.; Taheri, H. Digital Image Self-Recovery Using CNN Networks. Optik 2022, 264, 169345. [Google Scholar] [CrossRef]
- Daoui, A.; Karmouni, H.; Sayyouri, M.; Qjidaa, H. Efficient Methods for Signal Processing Using Charlier Moments and Artificial Bee Colony Algorithm. Circuits Syst. Signal Process. 2022, 41, 166–195. [Google Scholar] [CrossRef]
- Jiang, F.; Tao, W.; Liu, S.; Ren, J.; Guo, X.; Zhao, D. An End-to-End Compression Framework Based on Convolutional Neural Networks. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 3007–3018. [Google Scholar] [CrossRef]
- Xu, Q.; Sun, K.; Cao, C.; Zhu, C. A Fast Image Encryption Algorithm Based on Compressive Sensing and Hyperchaotic Map. Opt. Lasers Eng. 2019, 121, 203–214. [Google Scholar] [CrossRef]
- Gao, X. Image Encryption Algorithm Based on 2D Hyperchaotic Map. Opt. Laser Technol. 2021, 142, 107252. [Google Scholar] [CrossRef]
- Chen, L.; Tang, S.; Li, Q.; Zhong, S. A New 4D Hyperchaotic System with High Complexity. Math. Comput. Simul. 2018, 146, 44–56. [Google Scholar] [CrossRef]
- Zheng, L.; Zhang, Y.; Thing, V.L.L. A Survey on Image Tampering and Its Detection in Real-World Photos. J. Vis. Commun. Image Represent. 2019, 58, 380–399. [Google Scholar] [CrossRef]
- Christlein, V.; Riess, C.; Jordan, J.; Riess, C.; Angelopoulou, E. An Evaluation of Popular Copy-Move Forgery Detection Approaches. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1841–1854. [Google Scholar] [CrossRef]
- Schetinger, V.; Oliveira, M.M.; da Silva, R.; Carvalho, T.J. Humans Are Easily Fooled by Digital Images. Comput. Graph. 2017, 68, 142–151. [Google Scholar] [CrossRef]
- Yee, K.; Tantipongpipat, U.; Mishra, S. Image Cropping on Twitter: Fairness Metrics, Their Limitations, and the Importance of Representation, Design, and Agency. Proc. ACM Hum.-Comput. Interact. 2021, 5, 450:1–450:24. [Google Scholar] [CrossRef]
- Korshunova, I.; Shi, W.; Dambre, J.; Theis, L. Fast Face-Swap Using Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3677–3685. [Google Scholar]
- Nguyen, T.T.; Nguyen, Q.V.H.; Nguyen, D.T.; Nguyen, D.T.; Huynh-The, T.; Nahavandi, S.; Nguyen, T.T.; Pham, Q.-V.; Nguyen, C.M. Deep Learning for Deepfakes Creation and Detection: A Survey. Comput. Vis. Image Underst. 2022, 223, 103525. [Google Scholar] [CrossRef]
- Zhang, X.; Wang, S. Statistical Fragile Watermarking Capable of Locating Individual Tampered Pixels. IEEE Signal Process. Lett. 2007, 14, 727–730. [Google Scholar] [CrossRef]
- 70,000 Real Faces 2. Available online: https://www.kaggle.com/datasets/tunguz/70000-real-faces-2 (accessed on 11 March 2023).
- Ekstrom, M.P. Digital Image Processing Techniques; Academic Press: Cambridge, MA, USA, 2012; ISBN 978-0-323-14016-4. [Google Scholar]
- Zhou, S. Sczhou/CodeFormer 2023. Available online: https://github.com/sczhou/CodeFormer (accessed on 16 September 2023).
- Kim, C.; Yang, C.-N. Self-Embedding Fragile Watermarking Scheme to Detect Image Tampering Using AMBTC and OPAP Approaches. Appl. Sci. 2021, 11, 1146. [Google Scholar] [CrossRef]
- Siddiqui, G.F.; Iqbal, M.Z.; Saleem, K.; Saeed, Z.; Ahmed, A.; Hameed, I.A.; Khan, M.F. A Dynamic Three-Bit Image Steganography Algorithm for Medical and e-Healthcare Systems. IEEE Access 2020, 8, 181893–181903. [Google Scholar] [CrossRef]
- FaceSwapper|Swap Photo Video Face Online Free. Available online: https://faceswapper.ai/ (accessed on 28 April 2023).
- Flickr-Faces-HQ Dataset (FFHQ). Available online: https://www.kaggle.com/datasets/arnaud58/flickrfaceshq-dataset-ffhq (accessed on 13 October 2023).
- Boulogeorgos, A.-A.A.; Alexiou, A.; Merkle, T.; Schubert, C.; Elschner, R.; Katsiotis, A.; Stavrianos, P.; Kritharidis, D.; Chartsias, P.-K.; Kokkoniemi, J.; et al. Terahertz Technologies to Deliver Optical Network Quality of Experience in Wireless Systems Beyond 5G. IEEE Commun. Mag. 2018, 56, 144–151. [Google Scholar] [CrossRef]
- Daoui, A.; Mao, H.; Yamni, M.; Li, Q.; Alfarraj, O.; Abd El-Latif, A.A. Novel Integer Shmaliy Transform and New Multiparametric Piecewise Linear Chaotic Map for Joint Lossless Compression and Encryption of Medical Images in IoMTs. Mathematics 2023, 11, 3619. [Google Scholar] [CrossRef]
- Alvarez, G.; Li, S. Some Basic Cryptographic Requirements for Chaos-Based Cryptosystems. Int. J. Bifurc. Chaos 2006, 16, 2129–2151. [Google Scholar] [CrossRef]
Scheme’s Reference | Spatial/ Transform Domain | Block/Pixel-Based | Embedding Data Position (8 Bit Deep) | Image Category | Addressing the Tamper Coincidence Problem | Analysis of the Security Level? | Use of Deep Learning Techniques? |
---|---|---|---|---|---|---|---|
Aminuddin et al. [3] | Spatial | Block-based | 2-LSB | Color | Yes | No | No |
Aminuddin et al. [4] | Spatial | Block-based | 2-LSB | Color | Yes | No | No |
Molina-Garcia et al. [8] | Spatial | Block-based | 2-LSB | Color | Yes | No | No |
Tong et al. [18] | Spatial | Block-based | 3-LSB | Color | No | No | No |
Singh et al. [19] | DCT Transform | Block-based | 3-LSB | Grayscale/Color | Yes | No | No |
Cao et al. [20] | Spatial | Block-based | 2-LSB | Grayscale | Yes | No | No |
Tai et al. [21] | DWT Transform | Block-based | 2-LSB | Grayscale | Yes | Yes | No |
Fan et al. [22] | Spatial | Block-based | 2-LSB | Grayscale | No | No | No |
Bolourian Haghighi et al. [23] | LWT transform | Block-based | 2-LSB | Grayscale/Color | No | No | No |
Li et al. [24] | Spatial | Block-based | 2-LSB | Grayscale | No | No | No |
Al-Otum et al. [25] | Hybrid (spatial and DWT transform) | Block-based | 2-LSB | Color | Yes | No | No |
Su et al. [26] | Spatial | Block-based | 2-LSB | Medical Grayscale | Yes | No | No |
Rezaei et al. [27] | DCT Transform | Block-based | 2-LSB | Grayscale | No | No | Yes |
Proposed | Spatial | Pixel-based | 2-LSB | Color Face image | Yes | Yes | Yes |
Attacks \ Proportion of attacked image area | 5% | 10% | 15% | 20% | 25%
---|---|---|---|---|---
Cropping | 42.0127 | 39.3090 | 36.3278 | 32.1698 | 28.1234 |
Copy-move | 43.1398 | 40.811 | 35.7809 | 33.0643 | 27.1678 |
Face swapping | 42.5678 | 38.9865 | 35.1245 | 31.2236 | 28.5567 |
Object addition | 43.4567 | 38.1289 | 35.4887 | 32.0097 | 26.1309 |
Scheme \ Proportion of copied-moved areas | 10% | 20% | 40% | 60% | 80%
---|---|---|---|---|---
Proposed | 0.9988 | 0.9983 | 0.9978 | 0.9977 | 0.9976
Aminuddin et al. [3] | 0.9852 | 0.9806 | 0.9736 | 0.9719 | 0.9702 | |
Aminuddin et al. [4] | 0.9852 | 0.9806 | 0.9736 | 0.9719 | 0.9702 | |
Molina-Garcia et al. [8] | 0.9516 | 0.9632 | 0.9502 | 0.9703 | 0.9695 | |
Al-Otum et al. [25] | 0.9420 | 0.9302 | 0.9455 | 0.9560 | 0.9400 |
Attacks \ Schemes | Proposed | Aminuddin et al. [3] | Aminuddin et al. [4] | Molina-Garcia et al. [8] | Al-Otum et al. [25]
---|---|---|---|---|---
Cropping (rate of 25%) | 28.3616 | 24.9063 | 25.1633 | 23.1613 | 21.3685 |
Cropping (rate of 50%) | 23.0625 | 21.1696 | 21.635 | 20.1632 | 18.7452 |
Copy-move (rate of 25%) | 27.3611 | 24.1696 | 25.1698 | 23.6354 | 22.1696 |
Copy-move (rate of 50%) | 23.5063 | 21.1596 | 21.0056 | 20.1785 | 19.6321 |
Face swapping (rate 25%) | 26.8852 | 24.0258 | 24.9820 | 22.9621 | 20.1633 |
Face swapping (rate 50%) | 22.9630 | 20.9523 | 20.9816 | 20.1632 | 18.6592 |
Object addition (20% rate) | 32.1622 | 29.1652 | 30.1487 | 27.1632 | 26.1233 |
Object addition (40% rate) | 29.6305 | 27.0029 | 27.1598 | 25.1436 | 24.1678 |
Scheme features \ Schemes | Proposed | Aminuddin et al. [3] | Aminuddin et al. [4] | Molina-Garcia et al. [8] | Al-Otum et al. [25]
---|---|---|---|---|---
Watermarking method (block-based/Pixel-based) | Pixel-based | Block-based | Block-based | Block-based | Block-based |
Key space | - | - | - | - | |
Use of deep learning models? | Yes | No | No | No | No |
Data embedding domain | Spatial | Spatial | Spatial | Spatial | Hybrid |
Salt-and-pepper noise robustness? | Yes | - | - | - | Yes |
Gaussian noise robustness? | No | - | - | - | No |
Average runtime for watermarking and recovery data embedding | 1.0512 | 3.2346 | 2.1754 | 4.6580 | 5.9561 |
Average runtime for tamper detection and data recovery | 15.7416 | 9.1644 | 10.3498 | 10.1230 | 12.1678 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Daoui, A.; Yamni, M.; Altameem, T.; Ahmad, M.; Hammad, M.; Pławiak, P.; Tadeusiewicz, R.; A. Abd El-Latif, A. AuCFSR: Authentication and Color Face Self-Recovery Using Novel 2D Hyperchaotic System and Deep Learning Models. Sensors 2023, 23, 8957. https://doi.org/10.3390/s23218957