
CN107392949A - Image zone duplicating and altering detecting method based on local invariant feature - Google Patents


Info

Publication number
CN107392949A
CN107392949A (application CN201710579410.XA)
Authority
CN
China
Prior art keywords
image
point
characteristic
line segment
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710579410.XA
Other languages
Chinese (zh)
Other versions
CN107392949B (en)
Inventor
向北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201710579410.XA priority Critical patent/CN107392949B/en
Publication of CN107392949A publication Critical patent/CN107392949A/en
Application granted granted Critical
Publication of CN107392949B publication Critical patent/CN107392949B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image region copy-move tampering detection method based on local invariant features. A feature point set is first computed with an improved SUSAN corner detection algorithm; the feature points are then described with a local ternary pattern, which divides the results of colour comparisons into three classes so that the details of texture variation are better reflected; finally, feature matching is used to detect whether the image contains a copy-tampered region. The method describes the texture distribution of an image in finer detail and improves the accuracy of image region copy-move tampering detection.

Description

Image region copy-move tampering detection method based on local invariant features
Technical field
The present invention relates to the technical field of computer vision and image processing, and in particular to an image region copy-move tampering detection method based on local invariant features.
Background technology
With the widespread use of image processing software, all kinds of digital images can be processed and edited with ease, and large numbers of artificially tampered images now appear on the Internet. Such images adversely affect scientific research, forensic evidence, insurance claims, and public trust in the media, so verifying the authenticity of digital images has important practical significance.
Among the various image tampering techniques, region copying is one of the most widely used. A region of an image is copied and pasted onto another, disjoint region of the same image, in order to cover target information or forge a scene. Such tampering can be exposed by detecting whether two identical regions exist in the image, thereby judging whether the image has been artificially tampered with.
Existing region-duplication tampering detection methods for digital images fall into two categories:
1) Methods based on block feature matching: these chiefly include feature-matching methods based on the discrete cosine transform (DCT) and detection algorithms based on principal component analysis. Such methods are fairly simple, but detection fails when the copied region in the tampered image has undergone a geometric transformation such as rotation, scaling, or flipping.
2) Methods based on feature point detection and matching: feature points are first detected in the image and described, and the copy-paste regions are then located by a matching algorithm. Relatively complex feature point detectors such as SIFT and SURF are typically used. Although these algorithms have some resistance to rotation and scale changes of the copied region, they suffer from a serious defect: the copy-paste regions must be covered by a sufficient number of feature points, so if the copied region has no salient visual structure, the tampered region may be missed entirely.
Summary of the invention
To solve the above technical problem, the invention provides an image region copy-move tampering detection method based on local invariant features, comprising the following steps:
Step S100: perform feature point detection on all pixels of the image to be detected with the SUSAN algorithm to obtain a feature point set, denoted {(x1,y1),…,(xN,yN)};
Step S200: traverse the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that point;
Step S300: traverse the feature point set and perform bidirectional matching of the image feature vectors to obtain a similar feature point pair for each feature point, such that the Euclidean distance between the two points of each pair is minimal; the set of similar feature point pairs is denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of final matches;
Step S400: connect each pair of similar feature points with a line segment in the image to be detected, compute the slope fi of every segment and the difference fi − fi+1, and judge whether fi − fi+1 lies within −0.05 to 0.05; if so, mark the segment as a consistent segment, otherwise leave it unmarked;
Step S500: count the number of consistent segments (the slope count) and judge whether it is at least 50; if so, the image to be detected is preliminarily judged to contain a copied region, otherwise it contains no copy-tampered region.
Delete the remaining segments other than the consistent ones, compute the distance between every two consistent segments, and count the consistent segments whose mutual distance is less than 100; if this count is at least 40, the image to be detected contains a copy-tampered region, otherwise it does not.
Further, step S100 comprises the following steps:
Step S110: traverse the image to be detected with a circular template and compute the USAN value u(i,j) at each pixel;
Step S120: threshold the USAN values u(i,j) of all pixels in the image to be detected to obtain the feature point response Rx(i,j):
Rx(i,j) = max(0, th2 − u(i,j))    (4)
where th2 is a preset threshold;
Step S130: traverse all pixels of the image to be detected and judge whether u(i,j) < th2; if so, mark the pixel as a preliminary feature point, otherwise move on to the next pixel; all preliminary feature points form the preliminary feature point set;
Step S140: traverse the preliminary feature point set with non-maximum suppression; the surviving points form the feature point set, denoted {(x1,y1),…,(xN,yN)}.
The circular template is the region of all pixels (x,y) satisfying (x−x1)² + (y−y1)² ≤ 10, centred on the pixel (x1,y1).
Further, step S200 comprises the following steps: with any chosen feature point (xn,yn) as centre, the 7×7 image region of image P(x,y,z) is denoted the feature region Fn(x,y,z); traverse every pixel of Fn(x,y,z), sequentially apply the first, second, and third mode comparisons to each pixel to obtain the feature vector {RT1,…,RT3}, and collect the feature vectors of all pixels into the image feature vector.
Another aspect of the invention provides an image region copy-move tampering detection device based on local invariant features, applying the method above and comprising:
a feature point detection module, for performing feature point detection on all pixels of the image to be detected with the SUSAN algorithm to obtain the feature point set, denoted {(x1,y1),…,(xN,yN)};
a feature vector module, for traversing the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that point;
a similar feature point pair module, for traversing the feature point set and performing bidirectional matching of the image feature vectors to obtain a similar feature point pair for each feature point, such that the Euclidean distance between the two points of each pair is minimal, the set of pairs being denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of final matches;
a consistent segment module, for connecting each pair of similar feature points with a line segment in the image to be detected, computing the slope fi of every segment and the difference fi − fi+1, and judging whether fi − fi+1 lies within −0.05 to 0.05; if so, the segment is marked as a consistent segment, otherwise it is left unmarked;
a judging module, for counting the number of consistent segments (the slope count) and judging whether it is at least 50; if so, the image to be detected is preliminarily judged to contain a copied region, otherwise it contains no copy-tampered region;
the module then deletes the remaining segments other than the consistent ones, computes the distance between every two consistent segments, and counts the consistent segments whose mutual distance is less than 100; if this count is at least 40, the image to be detected contains a copy-tampered region, otherwise it does not.
Further, feature point detection module includes:
a USAN value module, for traversing the image to be detected with a circular template and computing the USAN value u(i,j) at each pixel;
a threshold module, for thresholding the USAN values u(i,j) of all pixels in the image to be detected to obtain the feature point response Rx(i,j):
Rx(i,j) = max(0, th2 − u(i,j))    (4)
where th2 is a preset threshold;
a preliminary feature point module, for traversing all pixels of the image to be detected and judging whether u(i,j) < th2; if so, the pixel is marked as a preliminary feature point, otherwise the threshold module proceeds to the next pixel; all preliminary feature points form the preliminary feature point set;
a non-maximum suppression module, for traversing the preliminary feature point set with non-maximum suppression, the surviving points forming the feature point set, denoted {(x1,y1),…,(xN,yN)}.
The circular template is the region of all pixels (x,y) satisfying (x−x1)² + (y−y1)² ≤ 10, centred on the pixel (x1,y1).
Technical effects of the invention:
The invention provides an image region copy-move tampering detection method based on local invariant features. A feature point set is first computed with a SUSAN corner detection algorithm; the feature points are then described with a local ternary pattern, which divides the results of colour comparisons into three classes so that the details of texture variation are better reflected; finally, feature matching detects whether a copy-tampered region exists. The method describes the texture distribution of an image in finer detail and improves the accuracy of copy-move tampering detection.
The method also encodes colour comparison information more accurately: the more accurate the feature description, the better the subsequent feature matching, and naturally the more accurate the overall result.
For specifics, refer to the following description of various embodiments proposed according to the image region copy-move tampering detection method based on local invariant features of the invention, which will make the above and other aspects of the invention clearer.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image region copy-move tampering detection method based on local invariant features provided by the invention;
Fig. 2 shows the circular template used by the SUSAN operator provided by the invention;
Fig. 3 is a schematic diagram of the image region copy-move tampering detection device based on local invariant features provided by the invention.
Embodiment
The accompanying drawings, which form a part of this application, provide a further understanding of the invention; the schematic embodiments of the invention and their descriptions serve to explain the invention and do not unduly limit it.
Referring to Fig. 1, the invention proposes an image region copy-move tampering detection method based on local invariant features, comprising the following steps:
Step S100: perform feature point detection on all pixels of the image to be detected with the SUSAN algorithm to obtain a feature point set, denoted {(x1,y1),…,(xN,yN)};
Let the colour RGB image to be checked for region-duplication tampering be P(x,y,z). The proposed method belongs to the class of feature point detection and matching methods, so feature point detection is required first. The SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm, a low-level image processing method, is a classical corner detection method: it extracts the corner and edge features of targets even under heavy noise, with accurate localisation.
Preferably, the invention makes some improvements to the SUSAN algorithm for greater stability and accuracy, comprising the following steps:
Step S110: traverse the image to be detected with a circular template and compute the USAN value u(i,j) at each pixel of the image;
The circular template is defined as follows: with the centre pixel denoted (x1,y1), the template covers all pixels (x,y) satisfying (x−x1)² + (y−y1)² ≤ 10. In this embodiment the circular template is as shown in Fig. 2: it fits within a 7×7 window and contains 37 pixels in total, the pixel labelled 19 being the centre. Other commonly used templates may also be employed.
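As a quick check of the template definition above, the following sketch (not code from the patent) reproduces the 37-pixel circular mask:

```python
import numpy as np

# Build the circular SUSAN template as a boolean mask inside a 7x7 window.
# Offsets (dx, dy) from the centre pixel satisfy dx^2 + dy^2 <= 10.
dy, dx = np.mgrid[-3:4, -3:4]
mask = dx**2 + dy**2 <= 10

print(int(mask.sum()))  # 37 pixels, matching the template described above
```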
For any pixel (i0,j0) of image P(x,y,z), place the centre of the circular template at (i0,j0) and compute, for every pixel inside the template, the difference between its colour value and that of the centre pixel (i0,j0). If this difference is no greater than the set similarity threshold th1, the pixel belongs to the USAN region; otherwise it does not.
In this way every pixel of the image can be judged for membership of the USAN (Univalue Segment Assimilating Nucleus) region, which existing methods express as:
c(i,j) = 1 if |P(i,j,z) − P(i0,j0,z)| ≤ th1(i0,j0), and c(i,j) = 0 otherwise,    (1)
where c(i,j) indicates whether pixel (i,j) belongs to the USAN region and th1(i0,j0) is the similarity threshold. The threshold is usually a manually chosen constant, which greatly constrains applications requiring automatic processing.
The invention therefore proposes a method of automatic threshold selection: the similarity threshold th1(i0,j0) is computed by formula (2) over Ω1, the 3×3 neighbourhood centred on (i0,j0).
Then all the values c(i,j) of the pixels lying inside the circular template placed on the image are accumulated:
u(i0,j0) = Σ_{(i,j)∈Ω2} c(i,j),    (3)
where Ω2 is the set of image pixels inside the circular template and u(i0,j0) is the USAN value of pixel (i0,j0).
Traversing all pixels of the image P(x,y,z) with the circular template yields the USAN values u(i,j) of all pixels.
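A minimal sketch of the USAN computation on a grayscale image follows. The adaptive threshold of formula (2) is not reproduced in the text above, so a fixed th1 stands in for it here as a stated assumption:

```python
import numpy as np

def usan_values(img, th1=27):
    """Compute the USAN value u(i, j) at every pixel of a grayscale image.

    u counts the pixels inside the 37-pixel circular template whose
    intensity differs from the centre (nucleus) by at most th1.  The
    fixed th1 is a placeholder for the adaptive threshold of formula (2).
    """
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    dy, dx = np.mgrid[-3:4, -3:4]
    offsets = np.argwhere(dx**2 + dy**2 <= 10) - 3  # 37 (dy, dx) offsets
    pad = np.pad(img, 3, mode='edge')
    u = np.zeros((h, w), dtype=int)
    for oy, ox in offsets:
        shifted = pad[3 + oy:3 + oy + h, 3 + ox:3 + ox + w]
        u += np.abs(shifted - img) <= th1  # c(i, j) of formula (1)
    return u

# On a flat image every template pixel is similar, so u == 37 everywhere.
flat = np.full((9, 9), 100, dtype=np.uint8)
print(usan_values(flat)[4, 4])  # 37
```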
Step S120: threshold the USAN values u(i,j) of all pixels in the image to be detected to obtain the feature point response Rx(i,j):
Rx(i,j) = max(0, th2 − u(i,j))    (4)
where th2 is a preset threshold, taken as 28 in this embodiment.
Step S130: traverse all pixels of the image to be detected and judge whether u(i,j) < th2; if so, mark the pixel as a preliminary feature point, otherwise move on to the next pixel; all preliminary feature points form the preliminary feature point set.
Note that Rx(i,j) can be positive only when u(i,j) < th2, so the preliminary feature points are exactly the points with a positive response.
Step S140: traverse the preliminary feature point set with non-maximum suppression; the surviving points form the feature point set, denoted {(x1,y1),…,(xN,yN)}.
For each preliminary feature point, say (i1,j1), examine the 5×5 neighbourhood centred on it: if no other pixel in the neighbourhood has a larger response Rx(i,j), i.e. Rx(i1,j1) is the maximum, retain the point; otherwise delete it by resetting Rx(i1,j1) to 0. After all preliminary feature points are processed, the set of pixels with Rx(i,j) > 0 is the final feature point set.
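Steps S120 to S140 can be sketched as follows, with th2 = 28 as in the embodiment; the function and its interface are illustrative, not from the patent:

```python
import numpy as np

def susan_corners(u, th2=28):
    """Threshold USAN values and apply 5x5 non-maximum suppression.

    Rx(i, j) = max(0, th2 - u(i, j)) is positive exactly where u < th2
    (the preliminary feature points); a point survives only if its
    response is the maximum of its 5x5 neighbourhood.
    """
    rx = np.maximum(0, th2 - np.asarray(u, dtype=int)).astype(float)
    h, w = rx.shape
    pad = np.pad(rx, 2, mode='constant')
    corners = []
    for i in range(h):
        for j in range(w):
            # pad[i:i+5, j:j+5] is the 5x5 neighbourhood of (i, j)
            if rx[i, j] > 0 and rx[i, j] >= pad[i:i + 5, j:j + 5].max():
                corners.append((i, j))
    return corners

u = np.full((7, 7), 37)
u[3, 3] = 5          # one strongly dissimilar nucleus -> a corner response
print(susan_corners(u))  # [(3, 3)]
```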
Step S200: traverse the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that point.
Preferably: the existing local binary pattern encodes colour comparison information, but a binary pattern only characterises a simple ordering, cannot reflect the degree of texture variation, and is easily affected by noise. To overcome these problems, the invention extends the local binary pattern and divides the result of each colour comparison into three classes, so that the details of texture variation are better reflected.
Step S200 comprises the following steps: with any chosen feature point (xn,yn) as centre, the 7×7 image region of image P(x,y,z) is denoted the feature region Fn(x,y,z); traverse every pixel of Fn(x,y,z), sequentially apply the first, second, and third mode comparisons to each pixel to obtain the feature vector {RT1,…,RT3}, and collect the feature vectors of all pixels into the image feature vector.
The procedure is illustrated below for an arbitrary feature point (xn,yn). On image P(x,y,z), the 7×7 region centred on (xn,yn) is denoted the feature region Fn(x,y,z); feature points and feature regions therefore correspond one to one. All the information of feature point (xn,yn) is reflected in Fn(x,y,z), so describing the feature region achieves the purpose of describing the feature point.
The feature region Fn(x,y,z) contains 49 pixels, each of which the invention processes in the same way; one pixel (x0,y0) serves as the example.
With (x0,y0) as centre, its 8 immediate neighbours on image P(x,y,z) are denoted {(x01,y01),…,(x08,y08)} and serve as the comparison pixels of the point. The first mode comparison is then made by formula (5), where m ∈ {1,2,…,8} indexes the comparison pixels, th1 is the first difference threshold (taken as 10 in this embodiment), and r1(m) is the code of pixel (xm,ym) under the first mode. The 8 comparison pixels yield an 8-bit binary number {r1(1), r1(2),…, r1(8)}, whose decimal value is the characteristic value of pixel (x0,y0) under the first mode, denoted RT1.
This is followed by the second mode comparison (formula (6)), where r2(m) is the code of pixel (xm,ym) under the second mode. The same 8 pixels yield an 8-bit binary number {r2(1), r2(2),…, r2(8)}, whose decimal value is the characteristic value of pixel (x0,y0) under the second mode, denoted RT2.
The third mode comparison is then made (formula (7)), where r3(m) is the code of pixel (xm,ym) under the third mode. The same 8 pixels yield an 8-bit binary number {r3(1), r3(2),…, r3(8)}, whose decimal value is the characteristic value of pixel (x0,y0) under the third mode, denoted RT3.
It can be seen that these three modes divide the texture distribution more finely: the difference threshold th1 classifies each colour comparison into one of three cases, no longer the simple greater-than/less-than of the local binary pattern. The proposed descriptor therefore better characterises the details of the texture distribution.
Each pixel (x0,y0) thus yields a feature vector {RT1,…,RT3} of length three.
Processing all pixels of the feature region Fn(x,y,z) in this way and arranging their feature vectors together finally yields a feature vector of length 49 × 3 = 147, {RT1,…,RT147}, completing the description of feature point (xn,yn).
All feature points are described in the same way, each obtaining a feature vector of length 49 × 3 = 147.
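The three-mode description can be sketched for a single 3×3 neighbourhood. Formulas (5) to (7) are not reproduced in the text, so the split of the colour comparison into three classes (brighter than the centre by more than th1, within th1 of it, darker by more than th1) is an assumption consistent with the description above, matching the standard local ternary pattern:

```python
import numpy as np

def ternary_codes(patch, th1=10):
    """Encode the 8 neighbours of a 3x3 patch centre under three modes.

    Assumed split of the colour comparison into three classes:
      mode 1: neighbour - centre >  th1   (brighter)
      mode 2: |neighbour - centre| <= th1 (similar)
      mode 3: centre - neighbour >  th1   (darker)
    Each mode yields an 8-bit binary number read clockwise from the
    top-left neighbour, converted to decimal (RT1, RT2, RT3).
    """
    c = int(patch[1, 1])
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    diffs = [int(p) - c for p in ring]
    rt = []
    for bits in ([d > th1 for d in diffs],          # mode 1
                 [abs(d) <= th1 for d in diffs],    # mode 2
                 [-d > th1 for d in diffs]):        # mode 3
        rt.append(int(''.join('1' if b else '0' for b in bits), 2))
    return rt  # [RT1, RT2, RT3]

patch = np.array([[200, 100, 100],
                  [100, 100, 100],
                  [100, 100,  50]])
print(ternary_codes(patch))  # [128, 119, 8]
```

Because every neighbour falls into exactly one of the three classes, RT1 + RT2 + RT3 always equals 255 under this assumed split, which makes a handy sanity check.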
Step S300: traverse the feature point set and perform bidirectional matching of the image feature vectors to obtain a similar feature point pair for each feature point, such that the Euclidean distance between the two points of each pair is minimal; the set of similar feature point pairs is denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of final matches.
The feature points are matched on the basis of their feature vectors: for each point of the feature point set {(x1,y1),…,(xN,yN)}, the closest point within the set is found by bidirectional matching.
Take an arbitrary feature point (xa,ya) with feature vector RTa as the example: compute its Euclidean distance to every other feature point and denote the closest point (xb,yb) as its preliminary match. Then treat (xb,yb) in the same way; if its closest point is exactly (xa,ya), the two points are finally determined to be a pair of similar feature points, otherwise (xa,ya) has no similar feature point.
Bidirectional matching of every feature point in this way yields a series of similar feature point pairs, denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of final matches.
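The bidirectional (mutual nearest neighbour) matching above can be sketched as follows; the dictionary-based interface is illustrative, not from the patent:

```python
import numpy as np

def bidirectional_matches(features):
    """Pair each feature point with its nearest neighbour (Euclidean
    distance between descriptors), keeping a pair only when the two
    points are mutually each other's nearest neighbour.

    `features` maps point -> descriptor vector; returns unordered pairs.
    """
    pts = list(features)

    def nearest(p):
        others = [q for q in pts if q != p]
        return min(others, key=lambda q: np.linalg.norm(
            np.asarray(features[p]) - np.asarray(features[q])))

    pairs = set()
    for p in pts:
        q = nearest(p)
        if nearest(q) == p:              # mutual nearest neighbours
            pairs.add(frozenset((p, q)))
    return [tuple(sorted(pair)) for pair in pairs]

feats = {(0, 0): [1.0, 0.0], (5, 5): [1.1, 0.0], (9, 2): [8.0, 8.0]}
print(bidirectional_matches(feats))  # [((0, 0), (5, 5))]
```

The mutual check is what distinguishes bidirectional matching from a plain nearest-neighbour search: the isolated point (9, 2) finds a nearest neighbour but is not that neighbour's nearest point, so it stays unmatched.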
Step S400: connect each pair of similar feature points with a line segment in the image to be detected, compute the slope fi of every segment and the difference fi − fi+1, and judge whether fi − fi+1 lies within −0.05 to 0.05; if so, mark the segment as a consistent segment, otherwise leave it unmarked.
Step S500: count the number of consistent segments (the slope count) and judge whether it is at least 50; if so, the image to be detected is preliminarily judged to contain a copied region, otherwise it contains no copy-tampered region.
Delete the remaining segments other than the consistent ones, compute the distance between every two consistent segments, and count the consistent segments whose mutual distance is less than 100; if this count is at least 40, the image to be detected contains a copy-tampered region, otherwise it does not.
Once the similar feature point pairs are obtained, each pair is connected with a line segment on image P(x,y,z) and the direction, i.e. slope, of every segment is computed:
fi = (ybi − yai) / (xbi − xai),    (8)
where {fi | i=1,…,t} are the slopes. The differences between these slopes are examined, and segments whose difference lies within (−0.05, 0.05) are regarded as having essentially the same direction. If at least 50 slopes are essentially consistent, i.e. at least 50 segments share essentially the same direction, the image P(x,y,z) to be detected is preliminarily judged to contain a copy-tampered region; otherwise it is directly judged free of copy tampering.
If the image P(x,y,z) is preliminarily judged to contain a copy-tampered region, the segments whose slopes are essentially consistent are retained on the image and the other segments deleted. The distances between these segments are then computed by the formula:
dij = |yai − yaj| + |xai − xaj|    (9)
where i and j are segment labels and dij is the distance between two segments. If there exist 40 segments whose mutual distances are less than 100, the image P(x,y,z) is finally judged to contain a copy-tampered region; otherwise it is judged free of copy tampering.
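The two-stage decision of steps S400 and S500 can be sketched as follows. The thresholds come from the description above; counting close segments as pairwise counts is one plausible reading of the text, not a confirmed detail, and the function name is illustrative:

```python
def is_copy_move(pairs, slope_tol=0.05, min_consistent=50,
                 dist_thresh=100, min_close=40):
    """Apply the two-stage decision of steps S400-S500.

    A segment joining a matched pair is 'consistent' when its slope
    differs from the next segment's slope by less than slope_tol.
    The image is flagged only if at least min_consistent consistent
    segments exist AND at least min_close close relations exist among
    them (city-block distance of start points below dist_thresh).
    """
    slopes = []
    for (xa, ya), (xb, yb) in pairs:
        slopes.append((yb - ya) / (xb - xa) if xb != xa else float('inf'))
    consistent = [pairs[i] for i in range(len(slopes) - 1)
                  if abs(slopes[i] - slopes[i + 1]) < slope_tol]
    if len(consistent) < min_consistent:
        return False
    close = 0
    for i, ((xai, yai), _) in enumerate(consistent):
        for (xaj, yaj), _ in consistent[i + 1:]:
            if abs(yai - yaj) + abs(xai - xaj) < dist_thresh:  # formula (9)
                close += 1
    return close >= min_close

# 60 parallel segments (slope 2) whose start points sit close together
pairs = [((i, i), (i + 10, i + 20)) for i in range(60)]
print(is_copy_move(pairs))  # True
```

Sixty parallel but widely scattered segments would pass the slope stage and still be rejected by the distance stage, which is the point of the second check: copied regions produce matched pairs that are both parallel and spatially clustered.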
Referring to Fig. 3, another aspect of the invention provides an image region copy-move tampering detection device based on local invariant features, applying the above method and comprising:
a feature point detection module, for performing feature point detection on all pixels of the image to be detected with the SUSAN algorithm to obtain the feature point set, denoted {(x1,y1),…,(xN,yN)};
a feature vector module, for traversing the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that point;
a similar feature point pair module, for traversing the feature point set and performing bidirectional matching of the image feature vectors to obtain a similar feature point pair for each feature point, such that the Euclidean distance between the two points of each pair is minimal, the set of pairs being denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of final matches;
a consistent segment module, for connecting each pair of similar feature points with a line segment in the image to be detected, computing the slope fi of every segment and the difference fi − fi+1, and judging whether fi − fi+1 lies within −0.05 to 0.05; if so, the segment is marked as a consistent segment, otherwise it is left unmarked;
a judging module, for counting the number of consistent segments (the slope count) and judging whether it is at least 50; if so, the image to be detected is preliminarily judged to contain a copied region, otherwise it contains no copy-tampered region;
the module then deletes the remaining segments other than the consistent ones, computes the distance between every two consistent segments, and counts the consistent segments whose mutual distance is less than 100; if this count is at least 40, the image to be detected contains a copy-tampered region, otherwise it does not.
Preferably, the feature point detection module comprises:
a USAN value module, for traversing the image to be detected with a circular template and computing the USAN value u(i,j) at each pixel;
a threshold module, for thresholding the USAN values u(i,j) of all pixels in the image to be detected to obtain the feature point response Rx(i,j):
Rx(i,j) = max(0, th2 − u(i,j))    (4)
where th2 is a preset threshold;
a preliminary feature point module, for traversing all pixels of the image to be detected and judging whether u(i,j) < th2; if so, the pixel is marked as a preliminary feature point, otherwise the threshold module proceeds to the next pixel; all preliminary feature points form the preliminary feature point set;
a non-maximum suppression module, for traversing the preliminary feature point set with non-maximum suppression, the surviving points forming the feature point set, denoted {(x1,y1),…,(xN,yN)}.
The circular template is the region of all pixels (x,y) satisfying (x−x1)² + (y−y1)² ≤ 10, centred on the pixel (x1,y1).
A simulation example under the conditions of the above embodiment is given below to describe the present invention in further detail.
Denote the colour RGB image to be checked for region copy-move tampering as P(x,y,z). The method proposed by the present invention belongs to the class of methods based on feature point detection and matching, so feature point detection is carried out first. The SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm is a classical low-level corner detection method; it can extract the corner and edge features of a target even under heavy noise, and localises them accurately. The present invention improves the SUSAN algorithm to obtain stronger stability and accuracy. The specific steps are as follows:
(1) Traverse the image P(x,y,z) with a circular template and compute the USAN (Univalue Segment Assimilating Nucleus) value at every pixel.
The circular template is defined as follows: with the centre pixel denoted (x1,y1), the template covers the region formed by all pixels (x,y) satisfying (x-x1)²+(y-y1)² ≤ 10. As shown in Fig. 2, the template is 7×7 and contains 37 pixels in total, of which the pixel labelled 19 is the centre. For any pixel (i0,j0) of the image P(x,y,z), place the centre of the circular template on (i0,j0) and compute, for every pixel of P(x,y,z) covered by the template, the difference between its colour value and the colour value of the centre pixel (i0,j0). If this difference is no greater than the set similarity threshold th1, the pixel belongs to the USAN region; otherwise it does not. Whether a pixel belongs to the USAN region is thus judged by the following formula:
c(i,j) indicates whether the pixel belongs to the USAN region; from the comparison described above it can be written as
c(i,j) = 1, if |P(i,j) - P(i0,j0)| ≤ th1(i0,j0); c(i,j) = 0, otherwise (1)
where th1(i0,j0) is the similarity threshold. This threshold is usually a constant chosen by hand, which greatly restricts its use in automatic processing. The present invention therefore proposes a method of selecting the threshold automatically, computing th1(i0,j0) from the neighbourhood Ω of size 3×3 centred on (i0,j0).
All pixels of the image P(x,y,z) covered by the circular template are then counted:
u(i0,j0) = Σ(i,j)∈T c(i,j) (3)
where T denotes the set of pixels of the image P(x,y,z) covered by the circular template centred on (i0,j0), and u(i0,j0) is the USAN value of the pixel (i0,j0). Traversing all pixels of the image P(x,y,z) with the circular template gives the USAN values u(i,j) of all pixels.
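For illustration only (not the patented implementation itself), step (1) can be sketched in Python as follows. A fixed similarity threshold `th1` stands in for the automatically selected per-pixel threshold described above, which is a simplifying assumption, and the sketch works on a single grayscale channel:

```python
import numpy as np

# Offsets of the 37-pixel circular template: (dx, dy) with dx^2 + dy^2 <= 10
TEMPLATE = [(dx, dy) for dy in range(-3, 4) for dx in range(-3, 4)
            if dx * dx + dy * dy <= 10]

def usan_map(gray, th1=27):
    """USAN value u(i, j) at every pixel of a grayscale image.

    c(i, j) = 1 when a template pixel differs from the nucleus by at most
    th1 (formula (1)); u is the sum of c over the template (formula (3)).
    A fixed th1 replaces the patent's automatic per-pixel threshold.
    """
    g = gray.astype(np.int32)
    u = np.zeros(g.shape, dtype=np.int32)
    for dx, dy in TEMPLATE:
        # Shift the image so each pixel is compared with its template
        # neighbour; borders wrap around, which is fine for a sketch.
        shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        u += (np.abs(shifted - g) <= th1).astype(np.int32)
    return u
```

On a perfectly flat image every template pixel matches the nucleus, so u equals 37 everywhere, matching the 37-pixel template size stated above.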
(2) After the USAN values of all pixels have been computed, the preliminary feature point response Rx(i,j) is obtained by thresholding:
Rx(i,j) = max(0, th2 - u(i,j)) (4)
where th2 is the threshold, set to 28 here. Rx(i,j) can be greater than 0 only when u(i,j) < th2, which indicates that the point is a preliminarily judged feature point. The set of all pixels with Rx(i,j) > 0 is the preliminary feature point set.
(3) The preliminary feature point set is processed with non-maximum suppression to obtain the final feature point set.
For each preliminarily judged feature point, say (i1,j1), examine the 5×5 neighbourhood centred on it and check whether any other pixel has an Rx(i,j) value greater than Rx(i1,j1). If not, i.e. Rx(i1,j1) is the maximum, the feature point (i1,j1) is retained; otherwise it is deleted, i.e. its Rx value is reset to 0. After all preliminarily judged feature points have been processed, the set of all pixels whose Rx(i,j) is still greater than 0 is the final feature point set, denoted {(x1,y1),…,(xN,yN)}.
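Steps (2) and (3) above can be sketched as follows; this is an illustrative reading of the text, not the patented implementation, and it returns pixel coordinates as (row, column):

```python
import numpy as np

def susan_corners(u, th2=28):
    """Threshold the USAN map (formula (4)) and apply 5x5 non-maximum
    suppression, returning the final feature point list [(i, j), ...]."""
    r = np.maximum(0, th2 - u.astype(np.int32))  # feature point response Rx
    h, w = r.shape
    points = []
    for i in range(h):
        for j in range(w):
            if r[i, j] <= 0:
                continue  # not a preliminary feature point
            # 5x5 neighbourhood, clipped at the image border
            i0, i1 = max(0, i - 2), min(h, i + 3)
            j0, j1 = max(0, j - 2), min(w, j + 3)
            if r[i, j] >= r[i0:i1, j0:j1].max():
                points.append((i, j))  # local maximum: keep it
    return points
```

A pixel survives only if its response is the maximum of its 5×5 neighbourhood, exactly as described for (i1,j1) above.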
Next, a feature vector description is built for each feature point. The present invention borrows the idea of the local binary descriptor, which encodes colour comparison information; however, a binary pattern only captures a simple order relation, cannot reflect the degree of texture variation, and is easily disturbed by noise. To overcome these problems, the present invention extends the local binary pattern by splitting each colour comparison result into 3 classes, so that the details of texture variation are better reflected. The specific steps are as follows:
Take any feature point (xn,yn) as an example. On the image P(x,y,z), the 7×7 image region centred on (xn,yn) is denoted the feature region Fn(x,y,z); feature points (xn,yn) and feature regions Fn(x,y,z) therefore correspond one to one. All the information of the feature point (xn,yn) is reflected in the feature region Fn(x,y,z), so we describe the feature point (xn,yn) by stating the features of the region Fn(x,y,z).
The feature region Fn(x,y,z) contains 49 pixels in total, and the present invention processes every pixel in the same way; one of them, (x0,y0), is taken as an example.
Centred on (x0,y0), the 8 adjacent pixels around it in the image P(x,y,z) are selected and denoted {(x01,y01),…,(x08,y08)} as the comparison pixels of the point. The first mode comparison is then carried out:
where m ∈ {1,2,…,8} is the index of the comparison pixel, th1 is the first difference threshold, set to 10 here, and r1(m) is the code of the pixel (xm,ym) under the first mode. The 8 comparison pixels finally yield an 8-bit binary number {r1(1),r1(2),…,r1(8)}; converting it to a decimal number gives the feature value of the pixel (x0,y0) under the first mode, denoted RT1.
The second mode comparison follows:
r2(m) is the code of the pixel (xm,ym) under the second mode. The same 8 pixels yield an 8-bit binary number {r2(1),r2(2),…,r2(8)}; converting it to a decimal number gives the feature value of the pixel (x0,y0) under the second mode, denoted RT2.
The third mode comparison is then carried out:
where r3(m) is the code of the pixel (xm,ym) under the third mode. The same 8 pixels yield an 8-bit binary number {r3(1),r3(2),…,r3(8)}; converting it to a decimal number gives the feature value of the pixel (x0,y0) under the third mode, denoted RT3.
It can be seen that these three modes divide the texture distribution more finely: the difference threshold th1 splits each colour comparison result into three cases, instead of the simple larger/smaller split of the local binary pattern. The feature description method proposed by the present invention can therefore better characterise the details of the texture distribution.
The pixel (x0,y0) finally yields a feature vector {RT1, RT2, RT3} of length three.
All pixels of the feature region Fn(x,y,z) are processed in the above way to obtain their respective feature vectors; arranging all of them together finally gives a feature vector {RT1,…,RT147} of length 49×3=147, which completes the description of the feature point (xn,yn).
All feature points are described in the above way, each yielding a feature vector of length 49×3=147.
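The descriptor above can be sketched as follows. The patent's comparison formulas (5)-(7) are not reproduced in the text, so the exact ternary split used here (brighter by more than th1 / within th1 / darker by more than th1) is an assumption consistent with a local ternary pattern; the sketch also works on a single grayscale channel for simplicity:

```python
import numpy as np

# 8-neighbour offsets around a pixel, clockwise from top-left
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def ternary_codes(gray, x0, y0, th1=10):
    """RT1..RT3 for one pixel: three 8-bit codes splitting each
    centre/neighbour comparison into 'brighter by more than th1',
    'within th1', and 'darker by more than th1' (assumed split)."""
    c = int(gray[x0, y0])
    rts = []
    for test in (lambda d: d > th1,        # mode 1: much brighter
                 lambda d: abs(d) <= th1,  # mode 2: similar
                 lambda d: d < -th1):      # mode 3: much darker
        code = 0
        for bit, (dx, dy) in enumerate(NEIGHBOURS):
            d = int(gray[x0 + dx, y0 + dy]) - c
            code |= int(test(d)) << bit    # 8 bits -> one decimal value
        rts.append(code)
    return rts  # [RT1, RT2, RT3]

def describe_point(gray, xn, yn, th1=10):
    """147-dimensional descriptor: RT1..RT3 for all 49 pixels of the
    7x7 feature region centred on the feature point (xn, yn)."""
    vec = []
    for dx in range(-3, 4):
        for dy in range(-3, 4):
            vec.extend(ternary_codes(gray, xn + dx, yn + dy, th1))
    return vec  # length 49 * 3 = 147
```

On a flat patch every neighbour falls in the "similar" class, so mode 2 yields code 255 and modes 1 and 3 yield 0, illustrating how the ternary split separates textured from flat regions.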
Next the feature points are matched on the basis of the feature vectors: for each feature point of the set {(x1,y1),…,(xN,yN)}, the feature point at minimum distance is found within the set by bidirectional matching.
Take any feature point (xa,ya) as an example, and denote its feature vector RTa. Compute the Euclidean distances between RTa and the feature vectors of all other feature points, and take the feature point at minimum distance, denoted (xb,yb), as its preliminary match. Then process the feature point (xb,yb) in the same way: if its minimum-distance feature point is exactly (xa,ya), the feature points (xa,ya) and (xb,yb) are finally determined to be a pair of similar feature points; otherwise the feature point (xa,ya) has no similar feature point.
Bidirectionally matching every feature point in this way yields a series of similar feature point pairs, denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of matched pairs.
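The bidirectional matching described above can be sketched as follows (illustrative only: a brute-force mutual nearest-neighbour search over the 147-dimensional descriptors, returning index pairs into the input list):

```python
import numpy as np

def mutual_matches(descriptors):
    """Bidirectional matching: points a and b pair up only when each is
    the other's nearest neighbour in Euclidean descriptor distance."""
    d = np.asarray(descriptors, dtype=np.float64)
    n = len(d)
    # Pairwise Euclidean distances; a point never matches itself.
    dist = np.linalg.norm(d[:, None, :] - d[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nearest = dist.argmin(axis=1)
    pairs = []
    for a in range(n):
        b = int(nearest[a])
        if a < b and nearest[b] == a:  # mutual: report each pair once
            pairs.append((a, b))
    return pairs
```

Requiring the match to hold in both directions discards the one-sided matches that the text says have "no similar feature point".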
After the similar feature point pairs have been obtained, each pair of similar feature points is connected with a line segment on the image P(x,y,z), and the direction of every segment, i.e. its slope, is computed:
fi = (ybi - yai)/(xbi - xai) (8)
{fi | i=1,…,t} are the slopes. The differences between these slopes are counted; when a difference lies within the range (-0.05, 0.05), the directions of the two segments are considered basically consistent. If at least 50 slopes are basically consistent, i.e. the directions of at least 50 segments are basically consistent, it is preliminarily judged that a copied and tampered region exists in the image P(x,y,z); otherwise it is directly judged that the image P(x,y,z) contains no copied and tampered region.
If it has been preliminarily judged that a copied and tampered region exists in the image P(x,y,z), the segments whose slopes are basically consistent are retained on the image P(x,y,z) and the other segments are deleted. The distances between these segments are then computed by the following formula:
dij = |yai - yaj| + |xai - xaj| (9)
where i, j are the labels of the segments and dij is the distance between the two segments. If there are 40 segments whose mutual distances are less than 100, it is finally judged that a copied and tampered region exists in the image P(x,y,z); otherwise it is judged that the image P(x,y,z) contains no copied and tampered region.
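The slope-consistency and distance checks above can be sketched as follows. The handling of vertical segments (infinite slope) is not specified in the text, so skipping them is an assumption, as is reading "40 segments whose mutual distances are less than 100" as segments with at least one close partner:

```python
def copy_move_decision(pairs, min_consistent=50, min_close=40,
                       dist_thresh=100, slope_tol=0.05):
    """Final decision of the embodiment: enough near-parallel match
    segments, enough of which are mutually close, imply a copied region.

    `pairs` is a list of ((xa, ya), (xb, yb)) matched feature points.
    """
    slopes, starts = [], []
    for (xa, ya), (xb, yb) in pairs:
        if xb == xa:
            continue  # assumption: ignore vertical segments
        slopes.append((yb - ya) / (xb - xa))  # formula (8)
        starts.append((xa, ya))

    # Keep segments whose slope is within slope_tol of another segment.
    consistent = [k for k in range(len(slopes))
                  if any(abs(slopes[k] - slopes[m]) < slope_tol
                         for m in range(len(slopes)) if m != k)]
    if len(consistent) < min_consistent:
        return False  # no copied and tampered region

    # Formula (9): L1 distance between the start points of two segments.
    close = [k for k in consistent
             if any(abs(starts[k][0] - starts[m][0]) +
                    abs(starts[k][1] - starts[m][1]) < dist_thresh
                    for m in consistent if m != k)]
    return len(close) >= min_close
```

Sixty parallel, tightly clustered match segments pass both tests, while a handful of matches fails the slope-count test, mirroring the two-stage decision above.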
The present invention proposes an image region copy-move tamper detection method based on local invariant features. The feature point set is first computed by an improved SUSAN corner detection algorithm; the feature points are then described by a local ternary pattern that splits each colour comparison result into three classes, so that the details of texture variation are better reflected; finally, feature matching detects whether the image contains a copied and tampered region. The method of the present invention describes the texture distribution of an image more finely and improves the accuracy of image region copy-move tamper detection.
Those skilled in the art will understand that the scope of the present invention is not restricted to the examples discussed above, and that several changes and modifications may be made to them without departing from the scope defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the specification, such illustration and description are merely explanatory or schematic and non-restrictive; the present invention is not limited to the disclosed embodiments.
By studying the drawings, the specification and the claims, those skilled in the art can understand and implement variations of the disclosed embodiments when practising the invention. In the claims, the term "comprising" does not exclude other steps or elements, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope of the present invention.

Claims (5)

1. A method for detecting image region copy-move tampering based on local invariant features, characterised by comprising the following steps:
Step S100: performing feature point detection on all pixels of an image to be detected using the SUSAN algorithm to obtain a feature point set, denoted {(x1,y1),…,(xN,yN)};
Step S200: traversing the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that feature point;
Step S300: traversing the feature point set and bidirectionally matching the image feature vectors of the feature points to obtain a similar feature point pair for each feature point, the Euclidean distance between the two feature points of each pair being minimal, and denoting the set of similar feature point pairs {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of matched pairs;
Step S400: connecting each pair of similar feature points in the image to be detected with a line segment, computing the slope fi of every segment and the difference fi-fi+1, and judging whether fi-fi+1 lies within -0.05~0.05; if so, the segment is marked as a consistent line segment, otherwise it is left unmarked;
Step S500: counting the number of consistent line segments as the slope count and judging whether the slope count in the image to be detected is ≥50; if so, the image to be detected is a copied image, otherwise the image to be detected contains no copied and tampered region;
deleting the remaining line segments other than the consistent line segments from the copied image, computing the segment distance between any two consistent line segments, counting the consistent line segments whose segment distance is less than 100, and judging whether the resulting consistent segment count is ≥40; if so, a copied and tampered region exists in the image to be detected, otherwise no copied and tampered region exists in the image to be detected.
2. The method for detecting image region copy-move tampering based on local invariant features according to claim 1, characterised in that the step S100 comprises the following steps:
Step S110: traversing the image to be detected with a circular template and computing the USAN value u(i,j) at every pixel of the image to be detected;
Step S120: thresholding the USAN values u(i,j) of all pixels of the image to be detected to obtain the feature point response Rx(i,j);
Rx(i,j) = max(0, th2 - u(i,j)) (4)
where th2 is the threshold;
Step S130: traversing all pixels of the image to be detected and judging whether the USAN value satisfies u(i,j) < th2; if so, the pixel is marked as a preliminary feature point, otherwise step S120 is carried out on the next pixel; all preliminary feature points form the preliminary feature point set;
Step S140: traversing the preliminary feature point set with non-maximum suppression and assembling the result into the feature point set, denoted {(x1,y1),…,(xN,yN)};
the circular template being the region formed by all pixels (x,y) satisfying (x-x1)²+(y-y1)² ≤ 10, centred on the pixel (x1,y1).
3. The method for detecting image region copy-move tampering based on local invariant features according to claim 2, characterised in that the step S200 comprises the following steps: taking any feature point (xn,yn) as the centre, denoting the 7×7 image region of the image P(x,y,z) the feature region Fn(x,y,z), traversing every pixel of the feature region Fn(x,y,z), sequentially performing a first mode comparison, a second mode comparison and a third mode comparison on each pixel to obtain a feature vector {RT1, RT2, RT3}, and assembling the feature vectors of all the pixels to obtain the image feature vector.
4. An image region copy-move tamper detection apparatus based on local invariant features according to the method of any one of claims 1 to 3, characterised by comprising:
a feature point detection module: for performing feature point detection on all pixels of an image to be detected using the SUSAN algorithm to obtain a feature point set, denoted {(x1,y1),…,(xN,yN)};
a feature vector module: for traversing the feature point set with a local ternary pattern to obtain, for each feature point, an image feature vector describing that feature point;
a similar feature point pair module: for traversing the feature point set and bidirectionally matching the image feature vectors of the feature points to obtain a similar feature point pair for each feature point, the Euclidean distance between the two feature points of each pair being minimal, the set of similar feature point pairs being denoted {[(xai,yai),(xbi,ybi)] | i=1,…,t}, where t is the number of matched pairs;
a consistent line segment module: for connecting each pair of similar feature points in the image to be detected with a line segment, computing the slope fi of every segment and the difference fi-fi+1, and judging whether fi-fi+1 lies within -0.05~0.05; if so, the segment is marked as a consistent line segment, otherwise it is left unmarked;
a judge module: for counting the number of consistent line segments as the slope count and judging whether the slope count in the image to be detected is ≥50; if so, the image to be detected is a copied image, otherwise the image to be detected contains no copied and tampered region;
deleting the remaining line segments other than the consistent line segments from the copied image, computing the segment distance between any two consistent line segments, counting the consistent line segments whose segment distance is less than 100, and judging whether the resulting consistent segment count is ≥40; if so, a copied and tampered region exists in the image to be detected, otherwise no copied and tampered region exists in the image to be detected.
5. The image region copy-move tamper detection apparatus based on local invariant features according to claim 4, characterised in that the feature point detection module comprises:
a USAN value module: for traversing the image to be detected with a circular template and computing the USAN value u(i,j) at every pixel of the image to be detected;
a threshold module: for thresholding the USAN values u(i,j) of all pixels of the image to be detected to obtain the feature point response Rx(i,j);
Rx(i,j) = max(0, th2 - u(i,j)) (4)
where th2 is the threshold;
a preliminary feature point module: for traversing all pixels of the image to be detected and judging whether the USAN value satisfies u(i,j) < th2; if so, the pixel is marked as a preliminary feature point, otherwise the threshold module proceeds to the next pixel; all preliminary feature points form the preliminary feature point set;
a non-maximum suppression module: for traversing the preliminary feature point set with non-maximum suppression and assembling the result into the feature point set, denoted {(x1,y1),…,(xN,yN)};
the circular template being the region formed by all pixels (x,y) satisfying (x-x1)²+(y-y1)² ≤ 10, centred on the pixel (x1,y1).
CN201710579410.XA 2017-07-17 2017-07-17 Image zone duplicating and altering detecting method and detection device based on local invariant feature Active CN107392949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710579410.XA CN107392949B (en) 2017-07-17 2017-07-17 Image zone duplicating and altering detecting method and detection device based on local invariant feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710579410.XA CN107392949B (en) 2017-07-17 2017-07-17 Image zone duplicating and altering detecting method and detection device based on local invariant feature

Publications (2)

Publication Number Publication Date
CN107392949A true CN107392949A (en) 2017-11-24
CN107392949B CN107392949B (en) 2019-11-05

Family

ID=60339347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710579410.XA Active CN107392949B (en) 2017-07-17 2017-07-17 Image zone duplicating and altering detecting method and detection device based on local invariant feature

Country Status (1)

Country Link
CN (1) CN107392949B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521821A (en) * 2011-10-24 2012-06-27 南开大学 Automatic identification and tampered area positioning method in digital image
CN104537654A (en) * 2014-12-19 2015-04-22 大连理工大学 Printed image tampering forensic methods based on half-tone dot location distortion
CN104616297A (en) * 2015-01-26 2015-05-13 山东省计算中心(国家超级计算济南中心) Improved SIFI algorithm for image tampering forensics
US20160132985A1 (en) * 2014-06-10 2016-05-12 Sam Houston State University Rich feature mining to combat anti-forensics and detect jpeg down-recompression and inpainting forgery on the same quantization
CN105631871A (en) * 2015-12-28 2016-06-01 辽宁师范大学 Color image duplicating and tampering detection method based on quaternion exponent moments
US20170091588A1 (en) * 2015-09-02 2017-03-30 Sam Houston State University Exposing inpainting image forgery under combination attacks with hybrid large feature mining


Also Published As

Publication number Publication date
CN107392949B (en) 2019-11-05

Similar Documents

Publication Publication Date Title
TW393629B (en) Hand gesture recognition system and method
Rosin et al. Evaluation of global image thresholding for change detection
Heath et al. A robust visual method for assessing the relative performance of edge-detection algorithms
US7502496B2 (en) Face image processing apparatus and method
US8000505B2 (en) Determining the age of a human subject in a digital image
CN109409190A (en) Pedestrian detection method based on histogram of gradients and Canny edge detector
CN111160249A (en) Multi-class target detection method in optical remote sensing images based on cross-scale feature fusion
CN103294989A (en) Method for discriminating between a real face and a two-dimensional image of the face in a biometric detection process
CN101147159A (en) Fast method of object detection by statistical template matching
CN109800682B (en) Driver attribute identification method and related products
Liu et al. Optimal matching problem in detection and recognition performance evaluation
JP2000517452A (en) Viewing method
CN108765407A (en) A kind of portrait picture quality determination method and device
WO2020217812A1 (en) Image processing device that recognizes state of subject and method for same
CN107689039A (en) Estimate the method and apparatus of image blur
CN110288040A (en) A kind of similar evaluation method of image based on validating topology and equipment
Panetta et al. Unrolling post-mortem 3D fingerprints using mosaicking pressure simulation technique
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
CN107392949B (en) Image zone duplicating and altering detecting method and detection device based on local invariant feature
Wu et al. Detecting image forgeries using metrology
CN103093204B (en) Behavior monitoring method and device
CN110348464A (en) An Image Forgery Detection Algorithm Based on Local Brightness Sequence of Multiple Support Regions
CN104732521B (en) A kind of similar purpose dividing method based on weight group similar active skeleton pattern
CN109690555A (en) Face detector based on curvature
CN107122714A (en) A kind of real-time pedestrian detection method based on edge constraint

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant