
CN111951254B - Edge-guided weighted-average-based source camera identification method and system - Google Patents


Info

Publication number
CN111951254B
CN111951254B (application CN202010832394.2A; granted as CN 111951254 B)
Authority
CN
China
Prior art keywords
camera
image
edge
fingerprint
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010832394.2A
Other languages
Chinese (zh)
Other versions
CN111951254A (en)
Inventor
刘云霞
张文娜
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202010832394.2A priority Critical patent/CN111951254B/en
Publication of CN111951254A publication Critical patent/CN111951254A/en
Application granted granted Critical
Publication of CN111951254B publication Critical patent/CN111951254B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The disclosure provides a source camera identification method and system based on an edge-guided weighted average, belonging to the technical field of source camera identification. The method comprises the following steps: acquiring image data shot by a camera; cropping the acquired image data into image blocks of a preset size; acquiring a residual image of each image block and constructing an edge-weighting weight map for it; fusing the acquired residual images with their corresponding edge-weighting weight maps to estimate a camera fingerprint; and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, then performing source camera identification according to that value. By assigning different weights to edge and non-edge regions, the method effectively reduces the influence of image edge regions on the camera fingerprint; by further fusing the residual images at the statistical level through maximum likelihood estimation, it greatly improves the source camera identification result.

Description

Edge-guided weighted-average-based source camera identification method and system
Technical Field
The disclosure relates to the technical field of source camera identification, in particular to a source camera identification method and system based on edge-guided weighted average.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Digital images act as information carriers and can serve as valid evidence in court. However, as digital images are maliciously tampered with, people's confidence in images has declined, so the problem of source camera identification in digital image forensics has received much attention. Sensor Pattern Noise (SPN) has proven an effective approach to the source camera identification (SCI) problem, because it is a unique fingerprint that identifies a specific device even among cameras of the same brand and model. The current method of acquiring the fingerprint is as follows: given a set of images from the same camera device, each residual is obtained by subtracting a denoised version from the original image, and the residuals are then aggregated with various strategies to estimate the fingerprint of the camera device.
The inventors of the present disclosure found that, owing to the imperfection of current image denoising algorithms, a large number of structures related to the image content remain in the residual image. Comparing the residual image against the original image shows that the residual is highly correlated with the edge/texture regions of the original image: smooth regions benefit the estimation of the camera fingerprint, while texture/edge regions interfere with it, thereby reducing the accuracy of source camera identification.
Disclosure of Invention
To overcome the shortcomings of the prior art, the present disclosure provides a source camera identification method and system based on an edge-guided weighted average. By assigning different weights to edge and non-edge regions, it effectively reduces the influence of image edge regions on the camera fingerprint; the residuals are then fused at the statistical level by maximum likelihood estimation to obtain the camera fingerprint, greatly improving the source camera identification result.
In order to achieve the above purpose, the present disclosure adopts the following technical scheme:
A first aspect of the present disclosure provides a method of source camera identification based on edge-guided weighted averaging.
A source camera identification method based on edge-guided weighted averaging, comprising the steps of:
Acquiring image data obtained by shooting by a camera;
Cutting the acquired image data into image blocks with preset sizes;
Acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
Fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
And calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
As some possible implementations, a Laplacian edge detection operator is used to detect edge regions and non-edge regions of the residual image, and weights are assigned to the edge regions and the non-edge regions.
As some possible implementations, the method for acquiring the camera fingerprint is as follows:
Cutting an original image of a database image into image blocks with preset sizes, and dividing the image blocks into a fingerprint set and a test set;
acquiring a group of residual images of a camera by utilizing a fingerprint set, and constructing an edge weighting weight graph of each residual image;
And (3) fusing the acquired residual images and the corresponding edge weighted weight images by using a camera fingerprint fusion method, and estimating to obtain the camera fingerprint.
As a further limitation, the residual images are fused pixel by pixel using maximum likelihood estimation to obtain the final camera fingerprint.
By way of further limitation, the identification accuracy of a camera is determined by calculating the ratio of the number of correctly classified test images in the test set of that camera to the total number of all test images in the test set.
As a further limitation, the experimental database is set in two ways, one is to randomly select one camera of each camera model for all camera models to form a first experimental database; another is to select multiple cameras from the same camera model as the second experimental database.
As a further limitation, for all the images of the cameras in the two experimental databases, the dataset is partitioned in two ways;
one is to randomly select a first number of images of all cameras as a fingerprint set and the remaining second number of images as a test set; another is to randomly select a third number of images of all cameras as the fingerprint set and the remaining fourth number of images as the test set.
As some possible implementations, the camera with the largest weighted correlation value between the residual image and the camera fingerprint is the source camera corresponding to the image to be identified.
As some possible implementations, the original image to be identified is denoised to obtain a denoised version thereof, and the difference between the original image and the denoised version is used as the residual image.
As some possible implementations, the image to be identified is cropped from the central region to a 64×64 or 128×128 image block.
A second aspect of the present disclosure provides a source camera identification system based on edge-guided weighted averaging.
A source camera identification system based on edge-guided weighted averaging, comprising:
A data acquisition module configured to: acquiring image data obtained by shooting by a camera;
An image cropping module configured to: cutting the acquired image data into image blocks with preset sizes;
a weight distribution module configured to: acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
A fingerprint acquisition module configured to: fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
An identification module configured to: and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
A third aspect of the present disclosure provides a medium having stored thereon a program which when executed by a processor implements the steps in a method of edge guided weighted average based source camera identification as described in the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the steps in the method of edge guided weighted average based source camera identification according to the first aspect of the present disclosure when the program is executed.
Compared with the prior art, the beneficial effects of the present disclosure are:
1. According to the method, system, medium, and electronic device of the present disclosure, different weights are assigned to edge and non-edge regions, effectively reducing the influence of image edge regions on the camera fingerprint; the residuals are further fused at the statistical level by maximum likelihood estimation to obtain the camera fingerprint, ultimately improving the source camera identification result.
2. In the fingerprint acquisition stage, the method, system, medium, and electronic device assign a different edge-weighting weight to each pixel of the residual image according to its reliability, reducing the contribution of edge-region residuals to the camera fingerprint, and combine this with the maximum likelihood estimation method to estimate a more accurate camera fingerprint.
3. By applying weighted correlation in the test stage, the method, system, medium, and electronic device greatly reduce the influence that image content left in the edge regions of a single test image by the denoising algorithm exerts on source camera identification; artifacts introduced by the denoising algorithm are also suppressed, yielding good results.
4. The method, system, medium, and electronic device can be combined with different denoising algorithms and SPN enhancement methods to further improve estimation accuracy; meanwhile, the experimental database and partitioned data sets designed here allow a fairer comparison of algorithm effectiveness.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the exemplary embodiments of the disclosure and together with the description serve to explain the disclosure, and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart illustrating a source camera identification method based on edge-guided weighted average according to embodiment 1 of the present disclosure.
Fig. 2 is an effect of an edge region on camera fingerprint estimation provided in embodiment 1 of the present disclosure.
Fig. 3 is a schematic diagram of a fingerprint extraction process according to embodiment 1 of the present disclosure.
Detailed Description
The disclosure is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
Example 1:
As shown in fig. 1, embodiment 1 of the present disclosure provides a source camera identification method based on edge-guided weighted average, including the steps of:
Acquiring image data to be identified;
Acquiring image data obtained by shooting by a camera;
Cutting the acquired image data into image blocks with preset sizes;
Acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
Fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
And calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
In detail, the method comprises edge-guided weighted averaging, maximum-likelihood-estimation residual fusion, and weighted correlation, which together yield better identification performance.
In the fingerprint extraction stage, the edge/non-edge regions of the image are first obtained, and different weight coefficients are assigned so that reliable regions contribute more to fingerprint estimation. Second, the residuals are fused by maximum likelihood estimation, exploiting their statistical information, to obtain a more accurate camera fingerprint. Finally, a weighted correlation is applied to the candidate camera fingerprint and the test-image residual to compute a correlation value. A single test image in the test stage is more susceptible to image content than the aggregate used when estimating the fingerprint, so the edge-guided weight map of the test image is used for weighted correlation when computing the correlation value.
The specific process comprises the following steps:
S1: setting an experiment database and a data set
In the Dresden database, the largest digital image forensics database, the experimental database was set up in two ways. In the first, one camera device was randomly selected for each camera model in the database; since the Dresden database contains 26 camera models, this yields an experimental database of 26 camera devices, referred to as experimental database A. In the second, camera models containing 5 camera devices each were selected from the Dresden data, giving 25 camera devices across 5 camera models; this is referred to as experimental database B.
For the images of all camera devices in experimental databases A and B, the data set was partitioned in two ways. One is to randomly select 25 images of every camera device as the fingerprint set, with the remaining 130 images as the test set; the other is to randomly select 50 images of every camera device as the fingerprint set, with the remaining 100 images as the test set. This division is fairer, because the total number of images per camera device in the Dresden database is not consistent.
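The 25/130 and 50/100 splits described above can be sketched as follows; the function name and seed are illustrative, not part of the patent.

```python
import random

def split_dataset(image_paths, n_fingerprint, seed=0):
    """Randomly split one camera's images into a fingerprint set of a
    fixed size and a test set of the remainder (e.g. 25/130 or 50/100)."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    shuffled = image_paths[:]           # do not mutate the caller's list
    rng.shuffle(shuffled)
    return shuffled[:n_fingerprint], shuffled[n_fingerprint:]
```

Calling `split_dataset(paths, 25)` on a 155-image camera yields the 25/130 partition used for experimental database A.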
S2: fingerprint estimation
Each image of the data set is cropped from the central region into image blocks of 64×64 or 128×128 size. Compared with the original image, a small image block contains less fingerprint information, which makes block-based source camera identification more difficult. Hereinafter, 'image block' refers to the portion cropped from an original image for the experiments.
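The central cropping step can be sketched as follows (function name illustrative):

```python
import numpy as np

def center_crop(image, size):
    """Crop a size x size block from the centre of a 2-D image array,
    as done for the 64x64 / 128x128 blocks in this section."""
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]
```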
Performing a BM3D denoising filter on the image block of the fingerprint set, the difference between the image block and its denoised version being referred to as the residual:
R = I - F_BM3D(I)    (1)
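Equation (1) can be sketched as below. A toy 3×3 mean filter stands in for the BM3D denoiser F_BM3D purely for illustration; the patent's method accepts any denoiser F in its place.

```python
import numpy as np

def denoise_mean3(img):
    """Toy 3x3 mean filter standing in for F_BM3D (an assumption for
    illustration only; BM3D is the denoiser used in the patent)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    out = np.zeros((h, w), dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def residual(image_block):
    """Residual R = I - F(I) from equation (1)."""
    i = np.asarray(image_block, dtype=np.float64)
    return i - denoise_mean3(i)
```

A perfectly flat block yields a zero residual, since the filter leaves constant regions unchanged.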
Edge regions and non-edge regions of the image block are detected using a Laplacian edge detection operator.
An edge-weighting weight map W is set: edge regions are assigned a weight of 0.475, and non-edge regions a weight of 1. Fig. 2 illustrates the effect of edge regions on camera fingerprint estimation.
Fig. 2 (a) shows an estimated camera fingerprint obtained by averaging the residual images of a plurality of image blocks. Fig. 2 (b) shows an image block obtained by cropping an original image. Fig. 2 (c) is the residual image extracted from Fig. 2 (b) by the GBWA method. Fig. 2 (d) is the edge map of Fig. 2 (b).
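The edge-guided weight map of the kind shown in Fig. 2 (d) can be sketched with a Laplacian kernel. The detection threshold below is an assumption; the patent fixes only the two weights (0.475 for edge pixels, 1 elsewhere).

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def edge_weight_map(image, edge_weight=0.475, threshold=10.0):
    """Build the edge-weighting map W: pixels whose Laplacian response
    exceeds `threshold` count as edge pixels and get weight 0.475,
    all other pixels get weight 1 (threshold value is an assumption)."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    resp = np.zeros((h, w), dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            resp += LAPLACIAN[dy + 1, dx + 1] * p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return np.where(np.abs(resp) > threshold, edge_weight, 1.0)
```

On a synthetic step edge, pixels along the step receive weight 0.475 while flat interior pixels keep weight 1.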
The residuals are fused with the edge-weighted average and a maximum likelihood estimation algorithm to obtain the final camera fingerprint.
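The pixel-wise fusion step can be sketched as below. Since the patent's exact expression is not reproduced on this page, the sketch follows the standard PRNU maximum-likelihood form, K = sum(R·I) / sum(I²), with the edge-weight maps modulating each term as an assumption.

```python
import numpy as np

def fuse_residuals_mle(residuals, images, weights, eps=1e-8):
    """Fuse per-block residuals into a camera fingerprint, pixel by
    pixel, via an edge-weighted version of the standard PRNU MLE:
    K = sum_i(W_i * R_i * I_i) / sum_i(W_i * I_i**2).
    The exact weighting used in the patent is an assumption here."""
    num = np.zeros_like(residuals[0], dtype=np.float64)
    den = np.zeros_like(residuals[0], dtype=np.float64)
    for r, im, w in zip(residuals, images, weights):
        num += w * r * im
        den += w * np.asarray(im, dtype=np.float64) ** 2
    return num / (den + eps)   # eps guards against division by zero
```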
A fingerprint extraction flow chart of this embodiment is shown in fig. 3.
S3: weighted correlation
Each image of the test set is likewise cropped from the central region into image blocks of 64×64 or 128×128 size. The image block is passed through the BM3D denoising filter and its residual is obtained. The edge and non-edge regions of the image block are detected with a Laplacian edge detection operator, an edge-weighting weight map W_t is set, and the edge regions are assigned a weight of 0.475 and the non-edge regions a weight of 1. A weighted correlation operation is performed on the candidate camera fingerprint and the test-image residual to obtain their normalized correlation coefficient.
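The normalized weighted correlation can be sketched as a weighted Pearson coefficient; the patent's exact formula appears in an equation not reproduced on this page, so this particular form is an assumption.

```python
import numpy as np

def weighted_correlation(residual, fingerprint, weight_map, eps=1e-12):
    """Normalized weighted correlation between a test-image residual
    and a candidate camera fingerprint, using the edge weight map W_t
    (weighted Pearson form, an assumption about the patent's formula)."""
    w = np.asarray(weight_map, dtype=np.float64)
    r = np.asarray(residual, dtype=np.float64)
    k = np.asarray(fingerprint, dtype=np.float64)
    wsum = w.sum()
    rm = (w * r).sum() / wsum            # weighted means
    km = (w * k).sum() / wsum
    num = (w * (r - rm) * (k - km)).sum()
    den = np.sqrt((w * (r - rm) ** 2).sum() * (w * (k - km) ** 2).sum())
    return num / (den + eps)
```

A residual identical to the fingerprint correlates at about +1; its negation at about -1.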
S4: source camera identification performance evaluation
Using the normalized correlation coefficient, experimental database A is classified 26 ways and experimental database B 5 ways; because the cameras in experimental database B come from the same camera model, their fingerprints are easier to confuse and the task is more difficult.
The decision rule followed during testing assigns the test image to the candidate camera that yields the largest correlation value. The recognition accuracy of a camera, formula (5), is the ratio of the number of correctly classified test images in a camera device's test set to the total number of test images in that set.
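The decision rule and formula (5) amount to an argmax over correlation values followed by a correct-over-total ratio; a minimal sketch (names illustrative):

```python
def identify_source(correlations):
    """Assign a test image to the candidate camera with the largest
    weighted correlation value (the decision rule of this section)."""
    return max(correlations, key=correlations.get)

def camera_accuracy(predictions, true_camera):
    """Recognition accuracy of one camera device, formula (5): the
    fraction of its test images assigned to the correct camera."""
    correct = sum(1 for p in predictions if p == true_camera)
    return correct / len(predictions)
```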
Since the number of camera devices in the two experimental databases is large, the average source camera recognition accuracy over all camera devices is adopted as the evaluation criterion.
The present embodiment will be further described with reference to specific examples.
First, the digital images in the Dresden database are downloaded, and the images are divided into fingerprint sets and test sets according to the data set division criteria described above. Second, each image is cropped from its center into image blocks of 64×64 or 128×128, and the residuals are fused using the edge-weighted average and maximum likelihood estimation to obtain the camera fingerprint. Finally, weighted correlation is performed between the test-image residual and each candidate camera fingerprint to obtain the normalized correlation coefficient used for source camera identification.
For the experimental database a, the method of this embodiment was compared with several other successful source camera identification methods under the same design.
The comparison results are given in Table 1; the accuracy of each individual camera device is calculated by formula (5).
From the experimental results shown in Table 1, the source camera identification method of this embodiment delivers the highest recognition accuracy, thanks to the edge-guided weighting, the maximum-likelihood-estimation residual fusion, and the weighted correlation. Based on the experimental setup of this embodiment, the number of fingerprint images and the size of the identification image block were each varied. It is well known that the more images are used to estimate the camera fingerprint, the more accurate the estimate, and that recognition accuracy drops sharply as the image block shrinks. The method of this embodiment achieves the highest recognition accuracy in all four combinations of these two variables, broadly demonstrating the effectiveness of the algorithm.
Generally, BM3D-based approaches exhibit the best performance owing to their strong noise-reduction capability. Even so, the present method continues to improve on them, by an average of 1.54%, 1.10%, 1.69%, and 0.38% in the four cases. Compared with the well-known MLE method, the recognition accuracy in the four cases improves by 26.06%, 18.97%, 24.65%, and 17.27%, respectively.
Table 1: comparison of average recognition accuracy of different source camera recognition methods
In the 26-way classification experiment, the method improves greatly over the MLE method and can effectively increase the recognition accuracy.
For experimental database B, the average recognition accuracy of the 5 camera models can be obtained in the same manner; to display the experimental results better, the overall average recognition accuracy over all camera models is calculated below.
The method of this embodiment is compared with several other successful source camera identification methods. The comparison results are given in Table 2; the accuracy of each individual camera device is calculated by the following formula:
In the four cases, compared with the BM3D-based method [3], the identification accuracy of this embodiment improves by 3.36%, 1.51%, 3.99%, and 1.27%. Compared with the well-known MLE method, the recognition accuracy in the four cases improves by 14.06%, 11.28%, 20.76%, and 15.14%, respectively.
Table 2: comparison of average recognition accuracy of different source camera recognition methods
Compared with the BM3D method, the method of this embodiment delivers a larger improvement in distinguishing camera devices from the same camera model, which is of great importance for actual forensic practice.
Example 2:
Embodiment 2 of the present disclosure provides a source camera identification system based on edge-guided weighted averaging, including:
A data acquisition module configured to: acquiring image data obtained by shooting by a camera;
An image cropping module configured to: cutting the acquired image data into image blocks with preset sizes;
a weight distribution module configured to: acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
A fingerprint acquisition module configured to: fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
An identification module configured to: and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
The operation method of the system is the same as the source camera identification method based on edge-guided weighted average provided in embodiment 1, and will not be described here again.
Example 3:
Embodiment 3 of the present disclosure provides a medium having a program stored thereon, which when executed by a processor, implements the steps in the edge-guided weighted-average-based source camera identification method according to embodiment 1 of the present disclosure, the steps being:
Acquiring image data obtained by shooting by a camera;
Cutting the acquired image data into image blocks with preset sizes;
Acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
Fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
And calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
The detailed steps are the same as those of the source camera identification method based on edge-guided weighted average provided in embodiment 1, and will not be repeated here.
Example 4:
Embodiment 4 of the present disclosure provides an electronic device, including a memory, a processor, and a program stored on the memory and executable on the processor, where the processor implements steps in the method for identifying a source camera based on edge-guided weighted average according to embodiment 1 of the present disclosure when the program is executed, where the steps are as follows:
Acquiring image data obtained by shooting by a camera;
Cutting the acquired image data into image blocks with preset sizes;
Acquiring a residual image of an image block, and constructing an edge weighting weight graph of the residual image;
Fusing the obtained residual images and the corresponding edge weighted weight graphs, and estimating to obtain a camera fingerprint;
And calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and carrying out source camera identification according to the weighted correlation value.
The detailed steps are the same as those of the source camera identification method based on edge-guided weighted average provided in embodiment 1, and will not be repeated here.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random access Memory (Random AccessMemory, RAM), or the like.
The foregoing description of the preferred embodiments of the present disclosure is provided by way of illustration only and is not intended to limit the disclosure; those skilled in the art may make various modifications and changes to the present disclosure. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (9)

1. A source camera identification method based on edge-guided weighted averaging, comprising the steps of:
acquiring image data captured by a camera;
cropping the acquired image data into image blocks of a preset size;
obtaining a residual image of each image block and constructing an edge-weighted weight map for the residual image;
fusing the obtained residual images with their corresponding edge-weighted weight maps and estimating the camera fingerprint from the fused result;
wherein the camera fingerprint is obtained as follows:
cropping the original images of a database into image blocks of a preset size, and dividing the image blocks into a fingerprint set and a test set;
obtaining a set of residual images for a camera from its fingerprint set, and constructing an edge-weighted weight map for each residual image;
fusing, by the camera fingerprint fusion method, the obtained residual images with their corresponding edge-weighted weight maps and estimating the camera fingerprint from the fused result;
fusing the residuals by means of an edge-weighted averaging algorithm and a maximum likelihood estimation algorithm to obtain the final camera fingerprint;
calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and performing source camera identification according to the weighted correlation value;
wherein a weighted correlation operation is performed between the fingerprint of the camera to be detected and the residual of a test image to obtain the weighted correlation value.
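As a rough, non-authoritative sketch of the fusion and matching steps recited in claim 1: the residuals could be combined as a per-pixel weighted average (the full method additionally applies maximum-likelihood estimation, omitted here), and matching could use a weighted normalized correlation. All parameter choices below are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def fuse_fingerprint(residuals, weight_maps):
    # per-pixel weighted average of residual images, guided by edge weight maps
    num = np.zeros_like(residuals[0], dtype=float)
    den = np.zeros_like(residuals[0], dtype=float)
    for r, w in zip(residuals, weight_maps):
        num += w * r
        den += w
    return num / np.maximum(den, 1e-8)

def weighted_correlation(residual, fingerprint, weight_map):
    # normalized correlation between weighted residual and weighted fingerprint
    a = (weight_map * residual).ravel()
    b = (weight_map * fingerprint).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(a @ b / denom)
```

In use, the camera whose estimated fingerprint yields the largest weighted correlation with a test residual would be reported as the source camera.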
2. The source camera identification method based on edge-guided weighted averaging according to claim 1, wherein edge regions and non-edge regions of the residual image are detected using a Laplacian edge detection operator, and respective weights are assigned to the edge regions and the non-edge regions.
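A minimal sketch of the weight assignment in claim 2, using the standard 3 × 3 Laplacian kernel. The threshold rule and the weight values `w_edge`/`w_nonedge` are assumptions chosen for illustration; the patent specifies only that edge regions and non-edge regions receive different weights.

```python
import numpy as np

# standard 3x3 Laplacian edge detection kernel
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_weight_map(residual, w_edge=0.3, w_nonedge=1.0):
    # convolve with the Laplacian kernel via shifted slices of an edge-padded copy
    padded = np.pad(residual.astype(float), 1, mode='edge')
    h, w = residual.shape
    response = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            response += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    response = np.abs(response)
    thresh = response.mean() + response.std()  # adaptive cutoff (an assumption)
    # edge pixels get the lower weight, non-edge pixels the higher weight
    return np.where(response > thresh, w_edge, w_nonedge)
```

Down-weighting edge pixels reflects the observation that scene edges leak into the noise residual and contaminate the fingerprint estimate, whereas flat regions carry cleaner sensor pattern noise.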
3. The source camera identification method based on edge-guided weighted averaging according to claim 1, wherein the final camera fingerprint is obtained by fusing the residual images pixel by pixel using maximum likelihood estimation;
or
the identification accuracy of a camera is obtained as the ratio of the number of correctly classified test images in that camera's test set to the total number of test images in the test set.
4. The source camera identification method based on edge-guided weighted averaging according to claim 1, wherein the experimental database is constructed in two ways: one is to randomly select one camera from each camera model, over all camera models, to form the first experimental database; the other is to select multiple cameras of the same camera model to form the second experimental database.
5. The source camera identification method based on edge-guided weighted averaging according to claim 4, wherein, for all camera images in both experimental databases, the dataset is partitioned in two ways:
one is to randomly select a first number of images from each camera as the fingerprint set, with the remaining second number of images as the test set; the other is to randomly select a third number of images from each camera as the fingerprint set, with the remaining fourth number of images as the test set.
6. The source camera identification method based on edge-guided weighted averaging according to claim 1, wherein the camera whose fingerprint has the largest weighted correlation value with the residual image is identified as the source camera of the image to be identified;
or
the original image to be identified is denoised to obtain a denoised version, and the difference between the original image and its denoised version is used as the residual image;
or
the image to be identified is cropped from its central region into a 64 × 64 or 128 × 128 image block.
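The residual extraction and central cropping described in claim 6 could be sketched as follows. A 3 × 3 mean filter stands in as a placeholder denoiser purely for illustration; the patent does not mandate a specific denoising filter in this claim.

```python
import numpy as np

def center_crop(img, size=64):
    # crop a size x size block from the central region of the image
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def residual_image(img):
    # residual = original minus denoised version (3x3 mean filter as placeholder)
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    denoised = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            denoised += padded[dy:dy + h, dx:dx + w]
    denoised /= 9.0
    return img - denoised
```

A perfectly flat image yields a zero residual under this scheme, consistent with the residual capturing only high-frequency noise content.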
7. A source camera identification system based on edge-guided weighted averaging, implementing the method of any one of claims 1-6 and comprising:
a data acquisition module configured to acquire image data captured by a camera;
an image cropping module configured to crop the acquired image data into image blocks of a preset size;
a weight distribution module configured to obtain a residual image of each image block and construct an edge-weighted weight map for the residual image;
a fingerprint acquisition module configured to fuse the obtained residual images with their corresponding edge-weighted weight maps and estimate the camera fingerprint from the fused result;
an identification module configured to calculate a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and to perform source camera identification according to the weighted correlation value.
8. A medium having a program stored thereon, wherein the program, when executed by a processor, performs the steps of the source camera identification method based on edge-guided weighted averaging according to any one of claims 1-6.
9. An electronic device comprising a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the source camera identification method based on edge-guided weighted averaging according to any one of claims 1-6.
CN202010832394.2A 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system Active CN111951254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010832394.2A CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010832394.2A CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Publications (2)

Publication Number Publication Date
CN111951254A CN111951254A (en) 2020-11-17
CN111951254B true CN111951254B (en) 2024-05-10

Family

ID=73343161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010832394.2A Active CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Country Status (1)

Country Link
CN (1) CN111951254B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025073B (en) * 2021-11-18 2023-09-29 支付宝(杭州)信息技术有限公司 Method and device for extracting hardware fingerprint of camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000175046A (en) * 1998-09-30 2000-06-23 Fuji Photo Film Co Ltd Image processing method and image processor
CN102819831A (en) * 2012-08-16 2012-12-12 江南大学 Camera source evidence obtaining method based on mode noise big component
CN107451990A (en) * 2017-06-13 2017-12-08 宁波大学 A kind of photograph image altering detecting method using non-linear guiding filtering
CN111178166A (en) * 2019-12-12 2020-05-19 中国科学院深圳先进技术研究院 Camera source identification method based on image content self-adaption

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4727720B2 (en) * 2008-12-31 2011-07-20 株式会社モルフォ Image processing method and image processing apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Improved Sensor Pattern Noise Estimation Method Based on Edge Guided Weighted Averaging; Wen-Na Zhang et al.; Proceedings of the International Conference on Machine Learning for Cyber Security; pp. 405-415 *
Effective Source Camera Identification based on MSEPLL Denoising Applied to Small Image Patches; Wen-Na Zhang; Proceedings of APSIPA Annual Summit and Conference 2019; pp. 18-21 *
Laplacian Operator-Based Edge Detectors; Xin Wang et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; vol. 29, no. 5; pp. 886-890 *
Sensor Pattern Noise Matching Based on Reliability Map for Source Camera Identification; Riccardo Satta et al.; Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP-2015); pp. 222-226 *

Also Published As

Publication number Publication date
CN111951254A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
JP2002288658A (en) Object extracting device and method on the basis of matching of regional feature value of segmented image regions
US10275677B2 (en) Image processing apparatus, image processing method and program
CN108416789A (en) Method for detecting image edge and system
CN111027546B (en) Character segmentation method, device and computer readable storage medium
WO2021109697A1 (en) Character segmentation method and apparatus, and computer-readable storage medium
JP5706647B2 (en) Information processing apparatus and processing method thereof
US8983199B2 (en) Apparatus and method for generating image feature data
CN103455994A (en) Method and equipment for determining image blurriness
CN110378893B (en) Image quality evaluation method and device and electronic equipment
CN113109368A (en) Glass crack detection method, device, equipment and medium
KR20180109658A (en) Apparatus and method for image processing
CN115908154A (en) Video late-stage particle noise removing method based on image processing
CN117115117B (en) Pathological image recognition method based on small sample, electronic equipment and storage medium
CN109741334A (en) A method of image segmentation is carried out by piecemeal threshold value
CN108205657A (en) Method, storage medium and the mobile terminal of video lens segmentation
CN109785356A (en) A kind of background modeling method of video image
CN106331746B (en) Method and apparatus for identifying watermark location in video file
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN112926695A (en) Image recognition method and system based on template matching
CN111951254B (en) Edge-guided weighted-average-based source camera identification method and system
Julliand et al. Automated image splicing detection from noise estimation in raw images
Cozzolino et al. PRNU-based forgery localization in a blind scenario
CN106446832B (en) Video-based pedestrian real-time detection method
CN115019069A (en) Template matching method, template matching device and storage medium
CN117830623A (en) Image positioning area selection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant