Efficient Source Camera Identification with Diversity-Enhanced Patch Selection and Deep Residual Prediction
Figure 1. Framework of the proposed source camera identification method.
Figure 2. Illustration of multiple-criteria-based patch selection. (a) Selected edge and textural patches (in red squares) and semantic representatives (in green squares); (b) visualization of selected patches.
Figure 3. Network structures of (a) Res2Net (reprinted with permission from ref. [42], Copyright 2019 IEEE) and (b) the proposed residual prediction module.
Figure 4. Visualization of typical patches produced by the residual prediction module. (a) Original patches; (b) residual patches.
Figure 5. Framework of the proposed source camera identification method.
Figure 6. Visualization of misclassified patches with (a) the patch selection scheme in [7] and (b) the proposed patch selection scheme.
Figure 7. Convergence curves of the proposed modified VGG network. (a) Loss vs. iterations; (b) accuracy vs. iterations.
Figure 8. Confusion matrices of (a) brand-level and (b) model-level identification.
Figure 9. Confusion matrix of instance-level identification.
Figure 10. Image tampering detection. (a) Original images; (b) tampered images; (c) detection results.
Figure 11. Failure examples of the proposed method at model-level identification.
Abstract
1. Introduction
- We propose a patch selection strategy based on local textural and semantic criteria, implemented by patchwise mean and variance scoring and by K-means clustering, respectively. Training cost is greatly reduced while the diversity of the training data is enhanced, which in turn forces the network to learn more intrinsic camera-related features for robust identification.
- We propose a residual prediction module, built on Res2Net [42], that automatically estimates the residual image to reduce the influence of image content. Richer, more granular multiscale features can be learned in a fully end-to-end manner, avoiding the drawbacks of traditional denoising methods caused by imperfect filtering.
- Based on careful examination of the images in the Dresden database [43], we suggest a patch-level evaluation protocol for camera instance-, model-, and brand-level experiments that enables fair comparison.
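The two selection criteria above can be sketched as follows. The scoring function, the two-dimensional feature vector, and all parameter values are illustrative assumptions; the paper specifies only that patches are scored by patchwise mean and variance and that semantic representatives come from K-means clustering.

```python
import numpy as np

def quality_score(patch):
    # Texture criterion (assumed form): favour patches with a mid-range
    # mean (neither dark nor saturated) and high variance (edges/texture),
    # since flat regions carry little camera-specific trace.
    m = patch.mean() / 255.0
    v = patch.var() / (255.0 ** 2)
    return v * (1.0 - abs(m - 0.5) * 2.0)

def kmeans(X, k, iters=20, seed=0):
    # Minimal Lloyd's-algorithm k-means for grouping patch features.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(np.float64)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def select_patches(image, patch=64, n_texture=32, k=8):
    # 1) Tile the image into non-overlapping patches.
    H, W = image.shape[:2]
    patches = [image[r:r + patch, c:c + patch]
               for r in range(0, H - patch + 1, patch)
               for c in range(0, W - patch + 1, patch)]
    # 2) Texture criterion: keep the top-scoring patches.
    scores = np.array([quality_score(p) for p in patches])
    top = scores.argsort()[::-1][:n_texture]
    # 3) Semantic criterion: cluster simple (mean, variance) features of
    #    the kept patches; the patch nearest each centroid is kept as a
    #    semantic representative, enforcing diversity.
    feats = np.array([[patches[i].mean(), patches[i].var()] for i in top])
    labels, centers = kmeans(feats, k)
    reps = []
    for j in range(k):
        members = top[labels == j]
        if len(members):
            d = ((feats[labels == j] - centers[j]) ** 2).sum(1)
            reps.append(int(members[d.argmin()]))
    return [patches[i] for i in reps]

img = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.float64)
sel = select_patches(img)
print(len(sel))  # at most k representative patches
```

The key design point is that clustering acts on top of the texture ranking: scoring alone tends to pick many near-identical high-contrast patches, while the cluster representatives spread the selection across distinct content types.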
2. Summary of Source Camera Identification Methods
2.1. Conventional vs. Deep Learning Methods
2.1.1. Conventional Methods
2.1.2. Deep Learning Methods
2.2. Patch Selection Schemes
2.3. Preprocessing Methods
3. The Proposed Source Camera Identification Method
3.1. Multiple-Criteria-Based Patch Selection
Algorithm 1: Multiple-Criteria-Based Patch Selection
3.2. Residual Prediction Module
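As a point of reference for the learned residual predictor, the fixed high-pass filtering baseline it is compared against in the preprocessing experiments can be sketched as follows. The 3×3 Laplacian-style kernel shown is a common choice and is an assumption here, not necessarily the exact filter of [8]; the proposed module instead learns this mapping end-to-end on a Res2Net backbone.

```python
import numpy as np

# Hand-crafted high-pass residual extraction: the fixed-filter approach
# that the learned residual prediction module is designed to replace.
# Kernel values are a common Laplacian-style high-pass (an assumption).
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float64) / 8.0

def highpass_residual(img):
    # 'Same'-size 2-D correlation with zero padding, written out directly
    # so the sketch has no dependencies beyond numpy.
    H, W = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((H, W), dtype=np.float64)
    for dr in range(3):
        for dc in range(3):
            out += KERNEL[dr, dc] * padded[dr:dr + H, dc:dc + W]
    return out

flat = np.full((8, 8), 100.0)
res = highpass_residual(flat)
# A constant image has zero residual in its interior; only the zero-padded
# border responds, illustrating how the filter suppresses image content.
print(abs(res[2:-2, 2:-2]).max())  # 0.0
```

The limitation motivating the learned module is visible even in this sketch: a single fixed kernel suppresses content imperfectly on real textures, whereas a trained multiscale predictor can adapt the suppression to local image structure.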
3.3. Modified VGG for Identification
3.4. Performance Evaluation
- For an SCI task at a given level, classes with only one instance at the level below should be removed. For example, the "FujiFilm" brand is excluded from brand-level identification because the Dresden data set contains only one FujiFilm model, "FujiFilm_FinePixJ50"; this avoids misleading the network into learning model-level features. The same principle applies at the model level, where models with only one camera instance are excluded. Instance-level SCI is unaffected, so all 74 camera instances are used.
- To reduce the effect of image content, the scenes in the training, validation, and test sets should be mutually exclusive. SCI algorithms are strongly affected by image content, and images of the same scene appearing in different subsets would severely bias the identification result. This split is implemented using the scene number identifier provided by the Dresden database.
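The scene-exclusive split described above can be sketched as follows; the record format, split ratios, and function name are illustrative assumptions, while the essential point (assigning whole scenes, not individual images, to subsets) follows the protocol.

```python
import random

def scene_disjoint_split(records, ratios=(0.7, 0.15, 0.15), seed=0):
    # records: list of (image_path, scene_id, camera_label) tuples.
    # Split by *scene*, not by image, so no scene appears in more than
    # one subset; Dresden provides a scene number per image.
    scenes = sorted({scene for _, scene, _ in records})
    rng = random.Random(seed)
    rng.shuffle(scenes)
    n_train = int(ratios[0] * len(scenes))
    n_val = int(ratios[1] * len(scenes))
    train_s = set(scenes[:n_train])
    val_s = set(scenes[n_train:n_train + n_val])
    split = {"train": [], "val": [], "test": []}
    for rec in records:
        scene = rec[1]
        if scene in train_s:
            split["train"].append(rec)
        elif scene in val_s:
            split["val"].append(rec)
        else:
            split["test"].append(rec)
    return split

# Toy example: 100 images over 10 scenes from one (hypothetical) camera.
recs = [(f"img_{i}.jpg", i % 10, "Kodak_M1063") for i in range(100)]
sp = scene_disjoint_split(recs)
```

Splitting at the image level instead would leak near-duplicate content between training and test sets and inflate reported accuracy, which is exactly the bias the protocol is designed to avoid.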
4. Experiments
4.1. Experimental Setup
4.2. Experiment 1: Determination of Patch Selection Parameters
4.3. Experiment 2: Comparison of Preprocessing Methods
4.4. Experiment 3: Comparison of Identification Network Structures
4.5. Experiment 4: Comparison with State-of-the-Art Methods
4.6. Experiment 5: Confusion Matrix Analysis
4.7. Image Tampering Detection
4.8. Failure Cases Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Stamm, M.C.; Wu, M.; Liu, K.J.R. Information forensics: An overview of the first decade. IEEE Access 2013, 1, 167–200. [Google Scholar] [CrossRef]
- Arjona, R.; Prada-Delgado, M.; Arcenegui, J.; Baturone, I. Trusted Cameras on Mobile Devices Based on SRAM Physically Unclonable Functions. Sensors 2018, 18, 3352. [Google Scholar] [CrossRef] [Green Version]
- Bernacki, J. A survey on digital camera identification methods. Forensic Sci. Int. Digit. Investig. 2020, 34, 300983. [Google Scholar] [CrossRef]
- Chen, M.; Fridrich, J.; Goljan, M.; Lukáš, J. Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 2008, 3, 74–90. [Google Scholar] [CrossRef] [Green Version]
- Li, C.T. Source camera identification using enhanced sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2010, 5, 280–287. [Google Scholar]
- Marra, F.; Poggi, G.; Sansone, C.; Verdoliva, L. Evaluation of residual–based local features for camera model identification. In Proceedings of the International Conference on Image Analysis and Processing, Genoa, Italy, 7–8 September 2015; pp. 11–18. [Google Scholar]
- Bondi, L.; Baroffio, L.; Güera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. First Steps Toward Camera Model Identification with Convolutional Neural Networks. IEEE Signal Process. Lett. 2017, 24, 259–263. [Google Scholar] [CrossRef] [Green Version]
- Tuama, A.; Comby, F.; Chaumont, M. Camera model identification with the use of deep convolutional neural networks. In Proceedings of the 2016 IEEE International Workshop on Information Forensics and Security (WIFS 2016), Abu Dhabi, United Arab Emirates, 4–7 December 2016; pp. 1–6. [Google Scholar]
- Yang, P.; Ni, R.; Zhao, Y.; Zhao, W. Source camera identification based on content–adaptive fusion residual networks. Pattern Recognit. Lett. 2017, 119, 195–204. [Google Scholar] [CrossRef]
- Ding, X.; Chen, Y.; Tang, Z.; Huang, Y. Camera identification based on domain knowledge–driven deep multi–task learning. IEEE Access 2019, 7, 25878–25890. [Google Scholar] [CrossRef]
- Yao, H.; Qiao, T.; Xu, M.; Zheng, N. Robust multi–classifier for camera model identification based on convolution neural network. IEEE Access 2018, 6, 24973–24982. [Google Scholar] [CrossRef]
- Lukáš, J.; Fridrich, J.; Goljan, M. Digital camera identification from sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2006, 1, 205–214. [Google Scholar] [CrossRef]
- Zhang, L.B.; Peng, F.; Long, M. Identifying source camera using guided image estimation and block weighted average. J. Vis. Commun. Image Represent. 2016, 48, 471–479. [Google Scholar] [CrossRef]
- Al-Ani, M.; Khelifi, F. On the SPN estimation in image forensics: A systematic empirical evaluation. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1067–1081. [Google Scholar] [CrossRef]
- Deng, Z.; Gijsenij, A.; Zhang, J. Source camera identification using auto–white balance approximation. In Proceedings of the 13th International Conference on Computer Vision (ICCV 2011), Barcelona, Spain, 6–13 November 2011; pp. 57–64. [Google Scholar]
- Alles, E.J.; Geradts, Z.J.; Veenman, C.J. Source camera identification for heavily jpeg compressed low resolution still images. J. Forensic Sci. 2009, 54, 1067–1081. [Google Scholar] [CrossRef]
- Tuama, A.; Comby, F.; Chaumont, M. Camera model identification based machine learning approach with high order statistics features. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO 2016), Budapest, Hungary, 29 August–2 September 2016; pp. 1183–1187. [Google Scholar]
- Sorrell, M.J. Multimedia Forensics and Security, 1st ed.; Li, C.T., Ed.; IGI Global: Pennsylvania, PA, USA, 2009; Chapter 14; pp. 292–313. ISBN 9781599048697. [Google Scholar]
- Cao, H.; Kot, A.C. Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur. 2009, 4, 899–910. [Google Scholar]
- Thai, T.H.; Retraint, F.; Cogranne, R. Camera model identification based on DCT coefficient statistics. Digit. Signal Process. 2015, 40, 88–100. [Google Scholar] [CrossRef] [Green Version]
- Huang, N.; He, J.; Zhu, N.; Xuan, X.; Liu, G.; Chang, C. Identification of the source camera of images based on convolutional neural network. Digit. Investig. 2018, 40, 72–80. [Google Scholar] [CrossRef]
- Ferreira, A.; Chen, H.; Li, B.; Huang, J. An Inception–based data–driven ensemble approach to camera model identification. In Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS 2018), Hong Kong, China, 11–13 December 2018; pp. 1–7. [Google Scholar]
- Kuzin, A.; Fattakhov, A.; Kibardin, I.; Iglovikov, V.I.; Dautov, R. Camera model identification using convolutional neural networks. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data 2018), Seattle, WA, USA, 10–13 December 2018; pp. 3107–3110. [Google Scholar]
- Rafi, A.M.; Kamal, U.; Hoque, R.; Abrar, A.; Das, S.; Laganière, R.; Hasan, M.K. Application of DenseNet in Camera Model Identification and Post–processing Detection. In Proceedings of the 2019 CVPR Workshops, Salt Lake City, UT, USA, 18–22 June 2019; pp. 19–28. [Google Scholar]
- Al Banna, M.H.; Haider, M.A.; Al Nahian, M.J.; Islam, M.M.; Taher, K.A.; Kaiser, M.S. Camera Model Identification using Deep CNN and Transfer Learning Approach. In Proceedings of the 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST 2019), Dhaka, Bangladesh, 10–12 January 2019; pp. 626–630. [Google Scholar]
- Zou, Z.Y.; Liu, Y.X.; Zhang, W.N.; Chen, Y.H.; Zang, Y.L.; Yang, Y.; Law, B.N.F. Robust Camera Model Identification Based on Richer Convolutional Feature Network. In Proceedings of the 2019 Asia–Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2019), Lanzhou, China, 18–21 November 2019; pp. 1202–1207. [Google Scholar]
- Rafi, A.M.; Tonmoy, T.I.; Kamal, U.; Wu, Q.J.; Hasan, M.K. RemNet: Remnant Convolutional Neural Network for Camera Model Identification. Neural Comput. Appl. 2021, 33, 3655–3670. [Google Scholar] [CrossRef]
- Mayer, O.; Stamm, M.C. Learned forensic source similarity for unknown camera models. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), Calgary, AB, Canada, 15–20 April 2018; pp. 2012–2016. [Google Scholar]
- Cozzolino, D.; Verdoliva, L. Noiseprint: A CNN–based camera model fingerprint. IEEE Trans. Inf. Forensics Secur. 2019, 15, 144–159. [Google Scholar] [CrossRef] [Green Version]
- Sameer, V.U.; Dali, I.; Naskar, R. A Deep Learning Based Digital Forensic Solution to Blind Source Identification of Facebook Images. In Proceedings of the 2018 International Conference on Information Systems Security, Bangkok, Thailand, 5 December 2018; pp. 291–303. [Google Scholar]
- Bayar, B.; Stamm, M.C. Towards open set camera model identification using a deep learning framework. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), Calgary, AB, Canada, 15–20 April 2018; pp. 2007–2011. [Google Scholar]
- Júnior, P.R.M.; Bondi, L.; Bestagini, P.; Tubaro, S.; Rocha, A. An in–depth study on open–set camera model identification. IEEE Access 2019, 7, 180713–180726. [Google Scholar] [CrossRef]
- Albisani, C.; Iuliani, M.; Piva, A. Checking PRNU Usability on Modern Devices. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), Toronto, ON, Canada, 6–11 June 2021; pp. 2535–2539. [Google Scholar]
- Iuliani, M.; Fontani, M.; Piva, A. A Leak in PRNU Based Source Identification–Questioning Fingerprint Uniqueness. IEEE Access 2021, 9, 52455–52463. [Google Scholar] [CrossRef]
- Lin, H.; Wo, Y.; Wu, Y.; Meng, K.; Han, G. Robust source camera identification against adversarial attacks. Comput. Secur. 2021, 100, 102079. [Google Scholar] [CrossRef]
- Wang, B.; Zhao, M.; Wang, W.; Dai, X.; Li, Y.; Guo, Y. Adversarial Analysis for Source Camera Identification. IEEE Trans. Circuits Syst. Video Technol. 2020. [Google Scholar] [CrossRef]
- Bayar, B.; Stamm, M.C. Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2691–2706. [Google Scholar] [CrossRef]
- Bayar, B.; Stamm, M.C. Augmented convolutional feature maps for robust cnn–based camera model identification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP 2017), Beijing, China, 17–20 September 2017; pp. 4098–4102. [Google Scholar]
- Kang, C.; Kang, S.U. Camera model identification using a deep network and a reduced edge dataset. Neural Comput. Appl. 2020, 32, 13139–13146. [Google Scholar] [CrossRef]
- Zou, Z.Y.; Liu, Y.X.; Zhang, W.N.; Chen, Y.H. Camera Model Identification Based on Residual Extraction Module and SqueezeNet. In Proceedings of the 2nd International Conference on Big Data Technologies, Jinan, China, 28–30 August 2019; pp. 211–215. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
- Gao, S.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P.H. Res2net: A new multi–scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 652–662. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Gloe, T.; Böhme, R. The ’Dresden Image Database’ for benchmarking digital image forensics. In Proceedings of the 2010 ACM Symposium on Applied Computing, Sierre, Switzerland, 22–26 March 2010; pp. 1584–1590. [Google Scholar]
- Kang, X.; Li, Y.; Qu, Z.; Huang, J. Enhancing source camera identification performance with a camera reference phase sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2012, 7, 393–402. [Google Scholar] [CrossRef]
- Lin, X.; Li, C.T. Enhancing sensor pattern noise via filtering distortion removal. IEEE Signal Process. Lett. 2016, 23, 381–385. [Google Scholar] [CrossRef]
- Rao, Q.; Wang, J. Suppressing random artifacts in reference sensor pattern noise via decorrelation. IEEE Signal Process. Lett. 2017, 24, 809–813. [Google Scholar] [CrossRef]
- Zandi, N.; Razzazi, F. Source Camera Identification With Dual-Tree Complex Wavelet Transform. IEEE Access 2020, 8, 18874–18883. [Google Scholar]
- Chen, C.; Stamm, M.C. Camera model identification framework using an ensemble of demosaicing features. In Proceedings of the 2015 IEEE International Workshop on Information Forensics and Security (WIFS 2015), Rome, Italy, 16–19 November 2015; pp. 1–6. [Google Scholar]
- Tuama, A.; Comby, F.; Chaumont, M. Source camera model identification using features from contaminated sensor noise. In Proceedings of the International Workshop on Digital Watermarking, Tokyo, Japan, 7–10 October 2015; pp. 83–93. [Google Scholar]
- Marra, F.; Poggi, G.; Sansone, C.; Verdoliva, L. A study of co-occurrence based local features for camera model identification. Multimed. Tools Appl. 2017, 76, 4765–4781. [Google Scholar] [CrossRef]
- Xu, B.; Wang, X.; Zhou, X.; Xi, J.; Wang, S. Source camera identification from image texture features. Neurocomputing 2016, 207, 131–140. [Google Scholar] [CrossRef]
- Wang, B.; Yin, J.; Tan, S.; Li, Y.; Li, M. Source camera model identification based on convolutional neural networks with local binary patterns coding. Signal Process. Image Commun. 2018, 68, 162–168. [Google Scholar] [CrossRef]
- Zandi, N.; Razzazi, F. Source Camera Identification Using WLBP Descriptor. In Proceedings of the 2020 International Conference on Machine Vision and Image Processing (MVIP 2020), Tehran, Iran, 18–20 February 2020; pp. 1–6. [Google Scholar]
- Thai, T.H.; Retraint, F.; Cogranne, R. Camera model identification based on the generalized noise model in natural images. Digit. Signal Process. 2015, 48, 285–297. [Google Scholar] [CrossRef] [Green Version]
- Xu, G.; Shi, Y.Q.; Su, W. Camera brand and model identification using moments of 1-D and 2-D characteristic functions. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP 2009), Cairo, Egypt, 7–10 November 2009; pp. 2917–2920. [Google Scholar]
- Thai, T.H.; Cogranne, R.; Retraint, F. Camera model identification based on the heteroscedastic noise model. IEEE Trans. Image Process. 2013, 23, 250–263. [Google Scholar] [CrossRef] [PubMed]
- Filler, T.; Fridrich, J.; Goljan, M. Using sensor pattern noise for camera model identification. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP 2008), San Diego, CA, USA, 12–15 October 2008; pp. 1296–1299. [Google Scholar]
- Çeliktutan, O.; Sankur, B.; Avcibas, I. Blind identification of source cell–phone model. IEEE Trans. Inf. Forensics Secur. 2008, 3, 553–566. [Google Scholar] [CrossRef] [Green Version]
- Ahmed, F.; Khelifi, F.; Lawgaly, A.; Bouridane, A. Comparative analysis of a deep convolutional neural network for source camera identification. In Proceedings of the 2019 IEEE 12th International Conference on Global Security, Safety and Sustainability (ICGS3 2019), London, UK, 16–18 January 2019; pp. 1–6. [Google Scholar]
- Mehrish, A.; Subramanyam, A.V.; Emmanuel, S. Sensor pattern noise estimation using probabilistically estimated RAW values. IEEE Signal Process. Lett. 2016, 23, 693–697. [Google Scholar] [CrossRef]
- MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 21 June–18 July 1965; pp. 281–297. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Bradski, G. The OpenCV Library. 2000. Available online: http://citebay.com/how-to-cite/opencv/ (accessed on 8 July 2021).
| No. | Camera Model | Resolution | No. Images |
|---|---|---|---|
| 0 | Canon_Ixus70 | | 363 |
| 1 | Casio_EX-Z150 | | 692 |
| 2 | FujiFilm_FinePixJ50 | | 385 |
| 3 | Kodak_M1063 | | 1698 |
| 4 | Nikon_CoolPixS710 | | 695 |
| 5 | Nikon_D200 | | 373 |
| 6 | Nikon_D70 | | 373 |
| 7 | Olympus_mju-1050SW | | 782 |
| 8 | Panasonic_DMC-FZ50 | | 564 |
| 9 | Pentax_OptioA40 | | 405 |
| 10 | Praktica_DCZ5.9 | | 766 |
| 11 | Ricoh_GX100 | | 559 |
| 12 | Rollei_RCP-7325XS | | 377 |
| 13 | Samsung_L74wide | | 441 |
| 14 | Samsung_NV15 | | 412 |
| 15 | Sony_DSC-H50 | | 253 |
| 16 | Sony_DSC-T77 | | 492 |
| 17 | Sony_DSC-W170 | | 201 |
| No. of Patches | 32 | 64 | 128 | 256 |
|---|---|---|---|---|
| Accuracy (%) | | | | |
| T | 128 | 32 | 32 | 32 | 64 | 64 | 64 |
|---|---|---|---|---|---|---|---|
| k | 0 | 16 | 32 | 96 | 16 | 32 | 64 |
| n | 0 | 6 | 3 | 1 | 4 | 2 | 1 |
| Accuracy (%) | | | | | | | |
| Method | Accuracy (%) |
|---|---|
| None | |
| Fixed high-pass filter [8] | |
| Mean filter | |
| Constrained convolutional layer [38] | |
| Proposed | |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Liu, Y.; Zou, Z.; Yang, Y.; Law, N.-F.B.; Bharath, A.A. Efficient Source Camera Identification with Diversity-Enhanced Patch Selection and Deep Residual Prediction. Sensors 2021, 21, 4701. https://doi.org/10.3390/s21144701