
CN103310236A - Mosaic image detection method and system based on local two-dimensional characteristics - Google Patents

Mosaic image detection method and system based on local two-dimensional characteristics

Info

Publication number
CN103310236A
CN103310236A CN2013102616210A CN201310261621A
Authority
CN
China
Prior art keywords
local
image
absolute value
dimensional
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102616210A
Other languages
Chinese (zh)
Inventor
李翔
李建华
裘瑛
黄豫蕾
王佳凯
陈继国
王士林
林祥
陈璐艺
冯皪魏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI PENGYUE JINGHONG INFORMATION TECHNOLOGY DEVELOPMENT Co Ltd
SHANGHAI INSTITUTE OF DATA ANALYSIS AND PROCESSING TECHNOLOGY
Shanghai Jiao Tong University
Original Assignee
SHANGHAI PENGYUE JINGHONG INFORMATION TECHNOLOGY DEVELOPMENT Co Ltd
SHANGHAI INSTITUTE OF DATA ANALYSIS AND PROCESSING TECHNOLOGY
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI PENGYUE JINGHONG INFORMATION TECHNOLOGY DEVELOPMENT Co Ltd, SHANGHAI INSTITUTE OF DATA ANALYSIS AND PROCESSING TECHNOLOGY, Shanghai Jiao Tong University filed Critical SHANGHAI PENGYUE JINGHONG INFORMATION TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN2013102616210A priority Critical patent/CN103310236A/en
Publication of CN103310236A publication Critical patent/CN103310236A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract


A spliced-image detection method and system based on local two-dimensional features, in the technical fields of image processing and information security. The image is partitioned into squares of different side lengths and a blockwise DCT transform is applied; the resulting block DCT coefficients are described with local two-dimensional features and merged into a complete detection feature, which is then classified with a classifier. The invention balances detection accuracy against detection complexity, and the detection accuracy reaches 89.9%.


Description

Method and system for spliced-image detection based on local two-dimensional features

Technical field

The present invention relates to a method and system in the technical fields of image processing and information security, and specifically to a method and system for detecting spliced images, based on local two-dimensional features, that requires no prior knowledge about the image under test.

Background

Digital image technology is now ubiquitous, and the barrier to using it keeps falling. Powerful digital image processing software is widely available, so even people without professional training can produce forged images that the naked eye cannot distinguish from genuine ones. When forged images appear in widely read forum or microblog posts, they can severely damage governments, enterprises, or individuals; detecting forged images with computer techniques has therefore become a research hotspot in the field of information content security.

Broadly speaking, mainstream forged-image detection techniques fall into two categories: active and passive. Active methods embed a digital signature or digital watermark when the image is generated and verify these marks to ensure the image has not been tampered with. Passive methods instead exploit the statistical properties of the image itself and do not rely on marks implanted in advance. Because embedded signatures and watermarks degrade the image to some extent, active methods are unsuitable in some scenarios; passive methods are more broadly applicable and have therefore become the main research direction.

Splicing is the most basic step in forging an image. A complete tampering workflow typically involves splicing, scaling, rotation, and post-processing, and detecting the splicing operation is the foundation of most forgery-identification methods.

A classic splicing detection method is the bispectral feature method of Ng et al.; see Ng TT, Chang SF. A data set of authentic and spliced image blocks. ADVENT Technical Report #203-2004-3, Columbia University. That report also provides a general spliced-image detection data set for comparing algorithms, which is widely cited; Ng et al. achieved 72% detection accuracy on it. Fu et al. used the Hilbert-Huang transform and moments of characteristic functions in the wavelet transform domain, achieving 80.15% detection accuracy.

Unlike these statistical methods, Johnson et al. detect splicing from inconsistencies in the lighting of different spliced regions; see Johnson MK, Farid H. Exposing digital forgeries by detecting inconsistencies in lighting. In Proceedings of the ACM Multimedia and Security Workshop, New York, USA, 2005; 1-9. Detection based on statistical features has a mature methodological framework (feature selection, feature extraction, and classification learning) and has therefore become the commonly used approach. However, the accuracy of existing statistical-feature methods still needs improvement, and many statistical features have not yet been applied to this problem.

A search of the prior art found Chinese patent document CN102855496A, published 2013-01-02, which discloses a method and system for authenticating occluded faces. It comprises: S1, capturing face video images; S2, preprocessing the captured images; S3, detecting the occluded face by estimating its position with a three-frame difference method based on the motion information of the video sequence and then confirming the position with the Adaboost algorithm; S4, recognizing the occluded face by dividing the face sample into several blocks and judging occlusion per block with an SVM binary-classification algorithm combined with a supervised 1-NN nearest-neighbour method: occluded blocks are discarded, while for unoccluded blocks the corresponding LBP texture feature vectors are extracted for weighted recognition, and a classifier based on orthogonal projection is used to reduce the number of feature matches. That technique mechanically divides the image into six regions; while this works when the positions of the face and its key organs are relatively fixed, it cannot solve passive image-forgery identification when no prior knowledge of the image is available.

Summary of the invention

To address the above shortcomings of the prior art, the present invention proposes a spliced-image detection method and system based on local two-dimensional features that balances detection accuracy against detection complexity.

The local two-dimensional feature is a Local Binary Pattern (LBP) feature customized for splicing detection. LBP was first proposed by Ojala et al.; see Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002; 24(7): 971-987.

As an image texture feature, LBP is widely used in face recognition, background modeling, and steganalysis. Its core idea is to compare each image pixel with its surrounding pixels; thresholding the comparisons yields a 0/1 sequence, which can be read as a binary number and hence represented by a decimal integer. Every pixel of the image matrix thus maps to a decimal integer, and these integers form a histogram that reflects statistical regularities of the image, such as its edges.
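The thresholding-and-binary-coding step just described can be illustrated for a single pixel; the centre and neighbour values below are toy numbers, not from the patent:

```python
# Centre pixel value and its 8 surrounding values (toy numbers).
center = 5
neighbours = [6, 2, 7, 5, 9, 1, 4, 8]

# Threshold each comparison: 1 where the neighbour exceeds the centre.
bits = [1 if g > center else 0 for g in neighbours]

# Read the 0/1 sequence as a binary number, giving one decimal code.
code = sum(b << p for p, b in enumerate(bits))
print(bits, code)
```

Computed over every pixel, these codes populate a 2^8 = 256-bin histogram.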

The invention is realized through the following technical solution:

The invention provides a spliced-image detection method based on local two-dimensional features: the image is partitioned into squares of different side lengths and a blockwise DCT transform is applied; the resulting block DCT coefficients are described with local two-dimensional features and merged into a complete detection feature, which is then classified with a classifier.

Specifically, the method comprises the following steps:

Step 1: apply a multi-scale blockwise DCT (discrete cosine transform) to each image in the data set to be processed, obtaining block DCT coefficient matrices; take the absolute value of every block DCT coefficient to obtain block DCT coefficient absolute-value matrices.

The blockwise DCT transform works as follows: choose a side length b, divide the image to be processed into equal square blocks of side b, and apply the DCT within each block.

The side length b of the square blocks is preferably a multiple of 8 (typically 8, 16, 32, and so on), but any value may be used depending on the application.

When the image dimensions are not divisible by b, zeros are appended on the right and bottom of the image until both dimensions are integer multiples of b. The block DCT coefficient matrix obtained after the transform is an M x N matrix; the image size is M' x N', and the two are normally equal. When zero columns and rows have been appended on the right and bottom, however, the resulting coefficient matrix is larger than the original image.
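Step 1 with the padding rule above can be sketched as follows; the function name and the use of SciPy's `dctn` are my choices, not the patent's:

```python
import numpy as np
from scipy.fft import dctn

def block_dct_abs(img, b):
    """Zero-pad `img` on the right and bottom to a multiple of b, then
    apply a 2-D DCT to each non-overlapping b x b block and take the
    absolute value of every coefficient."""
    h, w = img.shape
    H, W = -(-h // b) * b, -(-w // b) * b   # round up to multiples of b
    padded = np.zeros((H, W))
    padded[:h, :w] = img
    out = np.empty_like(padded)
    for i in range(0, H, b):
        for j in range(0, W, b):
            out[i:i+b, j:j+b] = np.abs(dctn(padded[i:i+b, j:j+b], norm="ortho"))
    return out
```

As the text notes, the returned matrix is larger than the input whenever padding was added (e.g. a 10 x 12 image with b = 8 yields a 16 x 16 coefficient matrix).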

Step 2: use local two-dimensional features to characterize the statistical changes caused by image splicing, converting each block DCT coefficient absolute-value matrix into a local two-dimensional feature histogram.

Step 2 comprises the following operations:

2.1 Around any element of the block DCT coefficient absolute-value matrix, take P points, denoted g_p, p = 1, ..., P, arranged around the central point at equal angular intervals of 2π/P. When a surrounding point does not fall on a matrix grid point, its value is estimated by interpolating from neighbouring grid points. The distance between the surrounding points and the central point is denoted R.

In the present invention the P points are preferably the 8 points above, below, left, right, upper-left, lower-left, upper-right, and lower-right of the element, and the distance R between surrounding points and the central point is preferably 1.

2.2 Compare the value of each surrounding point with the value of the central point in turn: when the surrounding point's value is greater than the central point's, record the comparison result as 1, otherwise as 0. Arrange the P comparison results from right to left into a 0/1 sequence of length P, read this sequence as a binary integer, and convert it to a decimal integer.

2.3 Apply the processing of step 2.2 to every element of the block DCT coefficient absolute-value matrix to obtain the corresponding decimal integers; all the decimal integers together form a local two-dimensional feature histogram.
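Operations 2.1-2.3 with the preferred parameters (P = 8 immediate neighbours, R = 1, so no interpolation is needed) can be sketched as below; the vectorized implementation and function name are mine:

```python
import numpy as np

def lbp_histogram(mat, sigma=0.0):
    """For each interior element of `mat`, compare its 8 immediate
    neighbours with it (P = 8, R = 1): a neighbour exceeding the centre
    by more than `sigma` contributes a 1-bit.  The resulting 8-bit codes
    are accumulated into a 256-bin histogram."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = mat[1:-1, 1:-1]                      # centre elements
    codes = np.zeros(c.shape, dtype=np.int64)
    for p, (dy, dx) in enumerate(offsets):
        g = mat[1 + dy:mat.shape[0] - 1 + dy,
                1 + dx:mat.shape[1] - 1 + dx]
        codes += ((g - c) > sigma).astype(np.int64) << p
    return np.bincount(codes.ravel(), minlength=256)
```

`sigma` defaults to 0 here, matching the plain comparison of step 2.2; the embodiment later generalizes the comparison with a threshold σ.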

Step 3: using different square block side lengths b in Step 1 yields absolute-value matrices of block DCT coefficients at the corresponding scales, and the operation of Step 2 yields a corresponding local two-dimensional feature histogram for each of them. The features generated from all the local two-dimensional feature histograms are concatenated into one complete statistical feature, which is then used for learning and classification with an SVM classifier.

Each local two-dimensional feature histogram corresponds one-to-one with a block DCT transform in Step 1 using a distinct side length b.

The horizontal axis of the histogram runs from 0 to 2^P - 1; the histogram value at each bin is taken as one feature, so the feature dimension is 2^P.

The classifier is the mainstream SVM implementation LibSVM; see Chang CC, Lin CJ. LIBSVM: a library for support vector machines. http://www.csie.ntu.edu.tw/cjlin/libsvm, 2001.

Step 3 comprises the following operations:

3.1 Obtain block DCT coefficient absolute-value matrices at different scales by taking different side lengths b, and derive a local two-dimensional feature histogram from each matrix as described in Step 2.

3.2 Extract the 2^P-dimensional feature from each local two-dimensional feature histogram and concatenate these features into one complete statistical feature.

3.3 Extract features from all remaining images in the data set with the same feature-extraction procedure, and divide the images into a training set and a test set. First, feed the features and class labels of the training set to the classifier to obtain a classification model; then feed the model and the test-set features to the classifier to obtain class predictions for the test set; finally, compute the classification accuracy from the known test-set labels.

The ratio of training images to test images is preferably 5:1;

the training set and the test set each preferably contain spliced and natural images in a 1:1 ratio.
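The concatenation of 3.2 and the preferred 5:1 / 1:1 splits above can be sketched with stand-in data; every size and value here is illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: one 256-bin histogram per block size, for one image.
hists = [rng.integers(0, 50, size=256) for _ in (4, 8, 16)]
feature = np.concatenate(hists)          # 3 * 256 = 768-dimensional

# 5:1 train/test split over a balanced set of 120 labelled images.
n = 120
labels = np.repeat([0, 1], n // 2)       # 1:1 spliced vs. natural
idx = rng.permutation(n)
train_idx, test_idx = idx[:100], idx[100:]
```

The training features and labels would then be passed to the SVM classifier, and the held-out 20 images used to measure accuracy.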

The invention also provides a spliced-image detection system based on local two-dimensional features, comprising a preprocessing module, a local two-dimensional histogram construction module, a feature extraction module, and a classifier module. The preprocessing module, connected to the local two-dimensional histogram construction module, receives the original image, applies the blockwise DCT transform, and outputs the block DCT coefficient absolute-value matrix to the histogram construction module. The histogram construction module, connected to the feature extraction module, applies the LBP operation to the absolute-value matrix and outputs the resulting local two-dimensional histogram. The feature extraction module, connected to the classifier module, extracts classification features from the local two-dimensional histogram and outputs them to the classifier module. The classifier module receives the classification features and performs the classification operation to produce the classification decision for the original image.

Technical effect

Compared with the prior art, the invention effectively captures the flaws introduced by image splicing through its multi-scale blockwise DCT transform. Experiments determined the optimal parameters of the local two-dimensional feature algorithm to be P = 8 and R = 1. The invention achieves higher detection accuracy than the existing techniques.

Description of the drawings

Fig. 1 is a flow chart of the present invention.

Fig. 2 illustrates the parameters of the LBP algorithm in the embodiment.

Fig. 3 is a schematic diagram of the system structure of the present invention.

Fig. 4 illustrates the processing results of the embodiment.

In the figure, (1) is the original image, and (2), (3), and (4) are the LBP histograms under 8x8, 16x16, and 32x32 blockwise DCT transforms, respectively. Because the value at bin 0 is abnormally large and would distort the display, the bin-0 values are omitted throughout.

Detailed description

An embodiment of the present invention is described in detail below. This embodiment is implemented on the basis of the technical solution of the invention, and detailed procedures and specific operations are given, but the scope of protection of the invention is not limited to the following embodiment.

Embodiment 1

As shown in Fig. 1, this embodiment comprises the following steps:

Step 1 applies three modes of blockwise DCT transform to the original image and takes absolute values (part (3) of Fig. 1), yielding three block DCT coefficient absolute-value matrices. The blockwise DCT can use blocks of various sizes, and different block sizes capture different pixel-level characteristics; at the same time, the number of block modes determines the dimension of the final statistical feature. Choosing the block DCT modes therefore requires balancing detection accuracy against detection complexity.

The three block modes shown in part (2) of Fig. 1, namely 4x4, 8x8, and 16x16, were shown experimentally to preserve detection accuracy while keeping the feature dimension small.

Table 1 gives the detection accuracy of LBP_{8,1} under different blockwise DCT modes; each accuracy value is the average over 20 random splits of the samples, with the mean squared error in parentheses. The image library used is Columbia University's spliced-image detection library; see Ng TT, Chang SF. A data set of authentic and spliced image blocks. ADVENT Technical Report #203-2004-3, Columbia University.

Table 1

[Table 1 appears only as an image in the original document; its contents are not reproduced in the text.]

Step 2 describes the generated block DCT matrices with local two-dimensional features, as follows:

The number of points around a pixel used for comparison is denoted P; these points are arranged around the central point at equal angular intervals of 2π/P. When a surrounding point does not fall on an image grid point, its value is interpolated from neighbouring grid points. The other variable, R, is the distance between the surrounding comparison points and the central point. Since bit p of a binary number has value 2^p, the formula converting the binary sequence to a decimal number can be written as:

$$\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p$$

where x_c and y_c give the position of the central point, g_c and g_p are the absolute values at the central point and at the surrounding points, and P and R are the two parameters of the algorithm: P is the number of surrounding points and R is their distance from the central point. The threshold function s is

$$s(x) = \begin{cases} 1, & x > \sigma \\ 0, & \text{otherwise} \end{cases}$$

where σ is a parameter, defined as the threshold of the threshold function s(x).

Common combinations of P and R are (8,1), (8,2), and (8,3). Besides considering these parameter combinations individually, they can also be combined into a larger feature, i.e., the multi-scale analysis. The results in Table 1 are for mode (8,1); Table 2 below gives the multi-scale analysis results.

Table 2: multi-scale analysis results

(P, R)       (8,1)+(8,2)    (8,1)+(8,2)+(8,3)
Accuracy     90.45%         90.48%

Comparative experiments weighing detection accuracy against feature dimension determined that the single scale P = 8, R = 1 is the best LBP descriptor parameter. For the function s, varying σ from 0 to 2 in steps of 0.1, the detection accuracy was observed to rise and then fall; it peaks at σ = 0.9. The parameters used in part (4) of Fig. 1 are as above.

Step 3 takes the statistical features obtained in Step 2 from the multi-scale block DCT coefficient absolute-value matrices of Step 1 as the machine-learning feature vector, trains and tests with LibSVM, and finally obtains the class decision for the original image.

LibSVM requires choosing a kernel function; after comparison, the Gaussian RBF kernel was selected. The Gaussian RBF kernel has two variables: the penalty parameter C and the Gaussian kernel width γ.

The best combination of C and γ is determined by grid search, where C is searched over {2^{-1}, 2^1, 2^3, 2^5} and γ over {2^{-5}, 2^{-3}, 2^{-1}, 2^1}.

These pairs of C and γ form a grid. Cross-validation on the training set finds the best parameter pair, which is then used for the final test. The results in Tables 1 and 2 above were obtained with this method.
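A sketch of this grid search on synthetic stand-in features: the patent uses LibSVM, whereas scikit-learn's `SVC` and `GridSearchCV` are used here as an assumed equivalent, and the toy data are mine:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-ins for the concatenated histogram features: two classes
# separated by a constant shift.
X = rng.normal(size=(60, 16))
y = np.repeat([0, 1], 30)
X[y == 1] += 3.0

# Search grid from the text: C over 2^{-1,1,3,5}, gamma over 2^{-5,-3,-1,1}.
grid = {"C": [2.0 ** k for k in (-1, 1, 3, 5)],
        "gamma": [2.0 ** k for k in (-5, -3, -1, 1)]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)             # cross-validates every (C, gamma) pair
print(search.best_params_, search.best_score_)
```

The best (C, γ) pair found by cross-validation would then be used to train the final model, as the text describes.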

As shown in Fig. 3, the system implementing the above method comprises a preprocessing module, a local two-dimensional histogram construction module, a feature extraction module, and a classifier module. The preprocessing module, connected to the local two-dimensional histogram construction module, receives the original image, applies the blockwise DCT transform, and outputs the block DCT coefficient absolute-value matrix to the histogram construction module; the histogram construction module, connected to the feature extraction module, applies the LBP operation to the absolute-value matrix and outputs the local two-dimensional histogram; the feature extraction module, connected to the classifier module, extracts classification features from the histogram and outputs them to the classifier module; the classifier module receives the classification features and performs classification to produce the classification decision for the original image.

Table 3 compares the detection accuracy of the invention with two existing mainstream detection methods on Columbia University's spliced-image detection library: Shi's Markov algorithm, see Shi YQ, Chen C, Chen W. A natural image model approach to splicing detection. Proceedings of the 9th Workshop on Multimedia & Security, ACM, 2007; and Ng's bispectral algorithm, see Ng TT, Chang SF, Sun Q. Blind detection of photomontage using higher order statistics. Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS '04), Vol. 5, IEEE, 2004.

Table 3

Method       This invention    Markov    Bispectral
Accuracy     89.9%             86.6%     72.3%

The table above shows that the algorithm of the present invention achieves better detection accuracy than the existing techniques.

Claims (10)

1. A spliced-image detection method based on local two-dimensional features, characterized by comprising the following steps:
Step 1: performing a multi-scale blockwise DCT transform on any image in the data set to be processed to obtain block DCT coefficient matrices, and taking the absolute value of every block DCT coefficient to obtain block DCT coefficient absolute-value matrices;
Step 2: using local two-dimensional features to characterize the statistical changes caused by image splicing, converting each block DCT coefficient absolute-value matrix into a local two-dimensional feature histogram;
Step 3: taking different square block side lengths b in Step 1 to obtain absolute-value matrices of block DCT coefficients at the corresponding scales, and obtaining the corresponding local two-dimensional feature histograms according to the operation of Step 2; concatenating the features generated from each local two-dimensional feature histogram into one complete statistical feature, and then using an SVM classifier for learning and classification.
2. The method according to claim 1, characterized in that the blockwise DCT transform comprises: selecting a side length b, dividing the image to be processed into equal square blocks of side b, and then performing the DCT transform within each square block.
3. method according to claim 2 is characterized in that, the side length b of described square tiles is 8 multiple; As the big or small aliquant b of pending image, then add 0 in the rightmost side and the lower side of pending image, until it meets the integral multiple of b; Resulting piecemeal DCT matrix of coefficients is the matrix of a M * N behind the process block DCT transform, and pending picture size is that the two is equirotal to M ' * N ' usually; When added 0 row and 0 row in the pending image rightmost side and lower side after, resulting matrix of coefficients can be greater than pending image.
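The padded multi-scale block DCT of claims 1-3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: a naive O(b^4) DCT-II is used for clarity (a practical version would use an FFT-based routine such as `scipy.fft.dctn`), and all function names are illustrative.

```python
import math

def dct2_block(block):
    """Naive orthonormal 2-D DCT-II of a square block (O(b^4); fine for b = 8)."""
    b = len(block)
    out = [[0.0] * b for _ in range(b)]
    for u in range(b):
        for v in range(b):
            s = 0.0
            for x in range(b):
                for y in range(b):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * b))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * b)))
            cu = math.sqrt(1.0 / b) if u == 0 else math.sqrt(2.0 / b)
            cv = math.sqrt(1.0 / b) if v == 0 else math.sqrt(2.0 / b)
            out[u][v] = cu * cv * s
    return out

def blockwise_abs_dct(img, b=8):
    """Zero-pad img on the right and bottom to a multiple of b (claim 3),
    apply the DCT tile by tile, and return the matrix of absolute
    block-DCT coefficients (Step 1 of claim 1)."""
    h, w = len(img), len(img[0])
    H = -(-h // b) * b          # ceil to the next multiple of b
    W = -(-w // b) * b
    padded = [[img[y][x] if y < h and x < w else 0.0
               for x in range(W)] for y in range(H)]
    out = [[0.0] * W for _ in range(H)]
    for by in range(0, H, b):
        for bx in range(0, W, b):
            blk = [row[bx:bx + b] for row in padded[by:by + b]]
            coef = dct2_block(blk)
            for u in range(b):
                for v in range(b):
                    out[by + u][bx + v] = abs(coef[u][v])
    return out
```

For a 10 × 10 image with b = 8, the coefficient matrix is 16 × 16, larger than the input, as claim 3 notes.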
4. The method according to claim 1, characterized in that said Step 2 comprises the following operations:
2.1 around any element of the block DCT coefficient absolute value matrix, taking P points, denoted g_p, p ∈ {1, …, P}; these points are arranged around the central point at equal angular intervals of 2π/P; when a peripheral point does not fall on a grid point of the matrix, its value is estimated by interpolation from the neighbouring grid points; the distance between a peripheral point and the central point is denoted R;
2.2 comparing the gray value of each peripheral point in turn with the gray value of the central point: when the gray value of the peripheral point is greater than that of the central point, the comparison result is recorded as 1, otherwise as 0; the comparison results of the P points are then arranged from right to left to form a 0/1 comparison result sequence of length P, and this sequence is treated as a binary integer and converted into a decimal integer;
2.3 applying the processing of step 2.2 to every element of the block DCT coefficient absolute value matrix to obtain the corresponding decimal integers, and forming a local two-dimensional feature histogram from all the decimal integers.
5. The method according to claim 4, characterized in that taking P points around any element of the block DCT coefficient absolute value matrix refers to: taking the 8 points above, below, to the left of, to the right of, and to the upper left, lower left, upper right, and lower right of said element, with the distance R between a peripheral point and the central point set to 1.
6. The method according to claim 4, characterized in that the horizontal scale values of said histogram range from 0 to 2^P − 1; the histogram value corresponding to each horizontal scale value serves as one feature, giving a feature dimension of 2^P.
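The local two-dimensional feature of claims 4-6 is a local binary pattern (LBP) computed over the absolute DCT coefficient matrix. A minimal sketch for the P = 8, R = 1 case of claim 5 follows; the bit-ordering is one possible reading of step 2.2, the interpolation of claim 2.1 is unnecessary at R = 1 since all neighbours fall on grid points, and border elements are simply skipped in this sketch.

```python
def lbp_histogram(mat):
    """2^8 = 256-bin local-binary-pattern histogram of a 2-D matrix
    (claims 4-6): each of the 8 unit-distance neighbours contributes a 1
    when it is greater than the centre, and the resulting 8-bit code is
    accumulated into a histogram."""
    h, w = len(mat), len(mat[0])
    hist = [0] * 256
    # 8 neighbours of claim 5, ordered by angle 2*pi*p/8 around the centre
    offs = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
            (0, -1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = mat[y][x]
            code = 0
            for p, (dy, dx) in enumerate(offs):
                if mat[y + dy][x + dx] > c:   # step 2.2: greater-than test
                    code |= 1 << p
            hist[code] += 1
    return hist
```

A centre larger than all its neighbours yields code 0, and a centre smaller than all of them yields code 255, the two extreme bins of the histogram.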
7. The method according to claim 1, characterized in that said Step 3 comprises the following operations:
3.1 obtaining block DCT coefficient absolute value matrices at different scales by taking different side-length values b, and applying Step 2 to each block DCT absolute value matrix to obtain a local two-dimensional feature histogram;
3.2 extracting a 2^P-dimensional feature from each local two-dimensional feature histogram, and concatenating these features in series to form one complete statistical feature;
3.3 extracting features from all remaining pictures in the data set to be processed according to the above feature extraction method, and dividing the pictures into a training set and a test set; first inputting the features and category labels of the training set into the classifier to obtain a classification model; then inputting the classification model and the features of the test set into the classifier to obtain the class decisions for the test set; finally, computing the classification accuracy from the known classes of the test set.
8. The method according to claim 7, characterized in that the ratio of the number of pictures in said training set to that in said test set is 5:1.
9. The method according to claim 7, characterized in that the ratio of the number of spliced pictures to that of natural pictures in said training set and said test set is 1:1.
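Steps 3.2-3.3 and the 5:1 split of claim 8 can be sketched as below. The SVM training itself (e.g. via LIBSVM or scikit-learn's `SVC.fit`/`predict`) is omitted to keep the example self-contained, and the function names are illustrative, not the patent's own.

```python
import random

def concat_features(histograms):
    """Serially concatenate the per-scale histograms into one complete
    statistical feature (step 3.2): e.g. 3 scales x 2^8 bins -> 768 dims."""
    feat = []
    for h in histograms:
        feat.extend(h)
    return feat

def split_dataset(samples, ratio=5, seed=0):
    """Randomly split (feature, label) pairs into training and test sets
    at the 5:1 picture-count ratio of claim 8."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = len(shuffled) * ratio // (ratio + 1)
    return shuffled[:cut], shuffled[cut:]
```

With 12 labelled pictures this yields 10 training and 2 test samples; per claim 9, the pool itself would contain spliced and natural pictures in equal numbers.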
10. A detection system for realizing the method according to any one of the preceding claims, characterized in that it comprises: a preprocessing module, a local two-dimensional histogram construction module, a feature extraction module, and a classifier module, wherein: the preprocessing module is connected with the local two-dimensional histogram construction module; it receives the original image, performs the block DCT transform to obtain the block DCT coefficient absolute value matrix, and outputs it to the local two-dimensional histogram construction module; the local two-dimensional histogram construction module is connected with the feature extraction module; it performs the LBP operation on the block DCT coefficient absolute value matrix to obtain the local two-dimensional histogram and outputs it to the feature extraction module; the feature extraction module is connected with the classifier module; it performs the feature extraction operation on the local two-dimensional histogram information to obtain classification features and outputs them to the classifier module; the classifier module receives the classification features and performs the classification operation to obtain a class decision on the original image.
CN2013102616210A 2013-06-27 2013-06-27 Mosaic image detection method and system based on local two-dimensional characteristics Pending CN103310236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102616210A CN103310236A (en) 2013-06-27 2013-06-27 Mosaic image detection method and system based on local two-dimensional characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102616210A CN103310236A (en) 2013-06-27 2013-06-27 Mosaic image detection method and system based on local two-dimensional characteristics

Publications (1)

Publication Number Publication Date
CN103310236A true CN103310236A (en) 2013-09-18

Family

ID=49135430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102616210A Pending CN103310236A (en) 2013-06-27 2013-06-27 Mosaic image detection method and system based on local two-dimensional characteristics

Country Status (1)

Country Link
CN (1) CN103310236A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544703A (en) * 2013-10-19 2014-01-29 侯俊 Digital image stitching detecting method
CN103914839A (en) * 2014-03-27 2014-07-09 中山大学 Image stitching and tampering detection method and device based on steganalysis
CN103955691A (en) * 2014-05-08 2014-07-30 中南大学 Multi-resolution LBP textural feature extracting method
CN103996044A (en) * 2014-05-29 2014-08-20 天津航天中为数据系统科技有限公司 Method and device for extracting targets through remote sensing image
CN104244016A (en) * 2014-08-12 2014-12-24 中山大学 H264 video content tampering detection method
CN104598929A (en) * 2015-02-03 2015-05-06 南京邮电大学 HOG (Histograms of Oriented Gradients) type quick feature extracting method
CN104899846A (en) * 2015-05-20 2015-09-09 上海交通大学 Digital image splicing passive detection method based on frequency domain local statistic model
CN106056523A (en) * 2016-05-20 2016-10-26 南京航空航天大学 Digital image stitching tampering blind detection method
CN106203492A (en) * 2016-06-30 2016-12-07 中国科学院计算技术研究所 The system and method that a kind of image latent writing is analyzed
CN108520215A (en) * 2018-03-28 2018-09-11 电子科技大学 Single-sample face recognition method based on multi-scale joint feature encoder
CN108645875A (en) * 2018-03-20 2018-10-12 上海市建筑科学研究院 A kind of defect identification method of precast shear wall grouting connection
CN109086801A (en) * 2018-07-06 2018-12-25 湖北工业大学 A kind of image classification method based on improvement LBP feature extraction
CN109885987A (en) * 2019-01-24 2019-06-14 中山大学 A Binary Image Steganalysis Method Based on Directional Local Binary Patterns
CN111145146A (en) * 2019-12-11 2020-05-12 北京航空航天大学 A HHT-based style transfer forgery image detection method and device
CN111415336A (en) * 2020-03-12 2020-07-14 泰康保险集团股份有限公司 Image tampering identification method and device, server and storage medium
CN112232162A (en) * 2020-10-06 2021-01-15 武汉烽火凯卓科技有限公司 Pedestrian detection method and device based on multi-feature fusion cascade classifier
CN113837976A (en) * 2021-09-17 2021-12-24 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080193031A1 (en) * 2007-02-09 2008-08-14 New Jersey Institute Of Technology Method and apparatus for a natural image model based approach to image/splicing/tampering detection
US20110002504A1 (en) * 2006-05-05 2011-01-06 New Jersey Institute Of Technology System and/or method for image tamper detection
US20110019907A1 (en) * 2006-01-13 2011-01-27 New Jersey Institute Of Technology Method for identifying marked images using statistical moments based at least in part on a jpeg array

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110019907A1 (en) * 2006-01-13 2011-01-27 New Jersey Institute Of Technology Method for identifying marked images using statistical moments based at least in part on a jpeg array
US20110002504A1 (en) * 2006-05-05 2011-01-06 New Jersey Institute Of Technology System and/or method for image tamper detection
US20080193031A1 (en) * 2007-02-09 2008-08-14 New Jersey Institute Of Technology Method and apparatus for a natural image model based approach to image/splicing/tampering detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUJIN ZHANG等: "Image-splicing forgery detection based on local binary patterns of DCT coefficients", 《WILEY ONLINE LIBRARY》, 26 February 2013 (2013-02-26), pages 2386 - 2395 *
ZHANG Zhen et al.: "A new spliced image detection method", 《计算机应用研究》 (Application Research of Computers), vol. 26, no. 3, 15 March 2009 (2009-03-15), pages 1127 - 1130 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544703A (en) * 2013-10-19 2014-01-29 侯俊 Digital image stitching detecting method
CN103544703B (en) * 2013-10-19 2016-12-07 上海理工大学 Digital picture splicing detection method
CN103914839A (en) * 2014-03-27 2014-07-09 中山大学 Image stitching and tampering detection method and device based on steganalysis
CN103955691A (en) * 2014-05-08 2014-07-30 中南大学 Multi-resolution LBP textural feature extracting method
CN103996044B (en) * 2014-05-29 2017-10-10 天津航天中为数据系统科技有限公司 The method and apparatus that target is extracted using remote sensing images
CN103996044A (en) * 2014-05-29 2014-08-20 天津航天中为数据系统科技有限公司 Method and device for extracting targets through remote sensing image
CN104244016A (en) * 2014-08-12 2014-12-24 中山大学 H264 video content tampering detection method
CN104244016B (en) * 2014-08-12 2018-04-10 中山大学 A kind of H264 video contents altering detecting method
CN104598929A (en) * 2015-02-03 2015-05-06 南京邮电大学 HOG (Histograms of Oriented Gradients) type quick feature extracting method
CN104899846A (en) * 2015-05-20 2015-09-09 上海交通大学 Digital image splicing passive detection method based on frequency domain local statistic model
CN106056523A (en) * 2016-05-20 2016-10-26 南京航空航天大学 Digital image stitching tampering blind detection method
CN106056523B (en) * 2016-05-20 2019-05-24 南京航空航天大学 Blind detection method of digital image splicing for tampering
CN106203492A (en) * 2016-06-30 2016-12-07 中国科学院计算技术研究所 The system and method that a kind of image latent writing is analyzed
CN108645875A (en) * 2018-03-20 2018-10-12 上海市建筑科学研究院 A kind of defect identification method of precast shear wall grouting connection
CN108520215A (en) * 2018-03-28 2018-09-11 电子科技大学 Single-sample face recognition method based on multi-scale joint feature encoder
CN108520215B (en) * 2018-03-28 2022-10-11 电子科技大学 Single-sample face recognition method based on multi-scale joint feature encoder
CN109086801A (en) * 2018-07-06 2018-12-25 湖北工业大学 A kind of image classification method based on improvement LBP feature extraction
CN109885987A (en) * 2019-01-24 2019-06-14 中山大学 A Binary Image Steganalysis Method Based on Directional Local Binary Patterns
CN109885987B (en) * 2019-01-24 2023-01-24 中山大学 A Steganalysis Method of Binary Image Based on Directional Local Binary Pattern
CN111145146A (en) * 2019-12-11 2020-05-12 北京航空航天大学 A HHT-based style transfer forgery image detection method and device
CN111145146B (en) * 2019-12-11 2023-04-18 北京航空航天大学 Method and device for detecting style migration forged image based on HHT
CN111415336A (en) * 2020-03-12 2020-07-14 泰康保险集团股份有限公司 Image tampering identification method and device, server and storage medium
CN112232162A (en) * 2020-10-06 2021-01-15 武汉烽火凯卓科技有限公司 Pedestrian detection method and device based on multi-feature fusion cascade classifier
CN112232162B (en) * 2020-10-06 2023-04-18 武汉烽火凯卓科技有限公司 Pedestrian detection method and device based on multi-feature fusion cascade classifier
CN113837976A (en) * 2021-09-17 2021-12-24 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain
CN113837976B (en) * 2021-09-17 2024-03-19 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain

Similar Documents

Publication Publication Date Title
CN103310236A (en) Mosaic image detection method and system based on local two-dimensional characteristics
Goel et al. Dual branch convolutional neural network for copy move forgery detection
Jourabloo et al. Face de-spoofing: Anti-spoofing via noise modeling
Dua et al. Image forgery detection based on statistical features of block DCT coefficients
Doegar et al. Cnn based image forgery detection using pre-trained alexnet model
CN102902959B (en) Face recognition method and system for storing identification photo based on second-generation identity card
CN113536990A (en) Deep fake face data identification method
Costa et al. Open set source camera attribution and device linking
JP2020525947A (en) Manipulated image detection
Hussain et al. Evaluation of image forgery detection using multi-scale weber local descriptors
Sun et al. A face spoofing detection method based on domain adaptation and lossless size adaptation
CN105956572A (en) In vivo face detection method based on convolutional neural network
CN104504669B (en) A kind of medium filtering detection method based on local binary patterns
CN104933414A (en) Living body face detection method based on WLD-TOP (Weber Local Descriptor-Three Orthogonal Planes)
CN103345631A (en) Image characteristic extraction, training, detection method, module, device and system
Liu et al. Overview of image inpainting and forensic technology
Marasco et al. Fingerphoto presentation attack detection: Generalization in smartphones
Kumari et al. Image splicing forgery detection: A review
CN107103266A (en) The training of two-dimension human face fraud detection grader and face fraud detection method
CN111259792A (en) Face liveness detection method based on DWT-LBP-DCT feature
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
Diaa A deep learning model to inspect image forgery on SURF keypoints of SLIC segmented regions
Doegar et al. Image forgery detection based on fusion of lightweight deep learning models
Al-Shamasneh et al. Image splicing forgery detection using feature-based of sonine functions and deep features
CN102129569B (en) Based on body detection device and the method for multiple dimensioned contrast characteristic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20130918