CN108549836B - Photo copying detection method, device, equipment and readable storage medium - Google Patents
Photo copying detection method, device, equipment and readable storage medium
- Publication number: CN108549836B (application CN201810196019.6A)
- Authority: CN (China)
- Prior art keywords: image, photo, face, key, extracting
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/168—Feature extraction; Face representation (G—Physics › G06—Computing; calculating or counting › G06V—Image or video recognition or understanding › G06V40/16—Human faces)
- G06V40/172—Classification, e.g. identification (same hierarchy)
Abstract
The invention relates to a photo copying detection method comprising the following steps: converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image; extracting feature values from each key part; normalizing the feature values of the key parts, and fusing the normalized feature values to obtain a fused feature value; inputting the fused feature value into a classifier for classification to obtain a classification result; and, if the classification result is the copied type, confirming that the photo is a copied photo. In this face photo copying detection method, the face is first subdivided into key regions and the features of each key region are then extracted, so the detection accuracy is higher. The invention also provides a photo copying detection device, equipment and a readable storage medium.
Description
Technical Field
The invention relates to the field of image recognition, and in particular to a photo copying detection method, device, equipment and readable storage medium.
Background
At present, face recognition is applied more and more widely. However, some lawbreakers exploit loopholes in face recognition technology by photographing a face certificate photo, thereby deceiving the face recognition system and illegally obtaining system authority. How to detect whether a face photo has been copied has therefore become a hot problem in the field.
The traditional copying detection method is video-based. It mainly uses the temporal information carried by a face in a video: facial expressions change to some extent over time, and these changes inevitably change the image features. A copied photograph, in contrast, shows no subtle change of facial expression over time, so this property can be used for liveness detection. However, the video-based copying detection method requires the cooperation of users and is inconvenient to use.
Copying detection methods based on a single photo have therefore appeared, but current single-photo methods usually extract features from the face as a whole; because different regions of the face have different characteristics, such methods have low detection accuracy.
Disclosure of Invention
Therefore, in view of the low accuracy of conventional face photo copying detection, it is necessary to provide a photo copying detection method, device, equipment and readable storage medium.
A photo copying detection method, comprising:
converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image;
extracting a feature value from each key part;
normalizing the feature values of the key parts, and fusing the normalized feature values to obtain a fused feature value;
inputting the fused feature value into a classifier for classification to obtain a classification result;
and, if the classification result is the copied type, confirming that the photo is a copied photo.
In the above photo copying detection method, the face is first subdivided into key regions and the features of each key region are then extracted, so the detection accuracy is higher.
As an embodiment, the converting of the photo into a binarized image and the obtaining of each key part in the photo with a connected-domain algorithm on the binarized image include:
judging whether the image to be detected includes face information;
and if yes, converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image.
As an embodiment, the converting of the photo into a binarized image and the obtaining of the key parts in the image to be detected with a connected-domain algorithm on the binarized image include:
performing color-space conversion on the input image to obtain a color image;
binarizing the color image to obtain a binarized image;
performing vertical mapping on the binarized image to obtain the face region in the binarized image;
performing grayscale conversion on the face region to obtain a grayscale image of the face region;
convolving the grayscale image, and binarizing the convolved image to obtain a binarized convolution image;
and extracting the key parts from the binarized convolution image with a connected-domain search algorithm.
As an embodiment, the key parts include the nose, the eyes and the mouth;
and the step of extracting features from the key parts includes:
extracting nose features with a gray-level co-occurrence matrix;
extracting eye features with the LBP algorithm;
and extracting mouth features with a wavelet transform.
A photo copying detection device, comprising:
a key part obtaining module, configured to convert the photo into a binarized image and obtain each key part in the photo with a connected-domain algorithm on the binarized image;
a feature value extraction module, configured to extract the features of each key part;
a feature value fusion module, configured to normalize the feature values of the key parts and fuse the normalized feature values to obtain a fused feature value;
a classification result obtaining module, configured to input the fused feature value into a classifier for classification to obtain a classification result;
and a copying judgment module, configured to confirm that the photo is a copied photo if the classification result is the copied type.
In the above photo copying detection device, the face is first subdivided into key regions and the features of each key region are then extracted, so the detection accuracy is higher.
As an embodiment, the key part obtaining module includes:
a face detection unit, configured to judge whether the image to be detected includes face information;
if yes, the key part obtaining module continues to convert the photo into a binarized image and obtains each key part in the photo with a connected-domain algorithm on the binarized image.
As an embodiment, the key part obtaining module includes:
a color image obtaining unit, configured to perform color-space conversion on the input image to obtain a color image;
a binarized image obtaining unit, configured to binarize the color image to obtain a binarized image;
a face region obtaining unit, configured to perform vertical mapping on the binarized image to obtain the face region in the binarized image;
a grayscale image obtaining unit, configured to perform grayscale conversion on the face region to obtain a grayscale image of the face region;
a binarized convolution image unit, configured to convolve the grayscale image and binarize the convolved image to obtain a binarized convolution image;
and a key part extraction unit, configured to extract the key parts from the binarized convolution image with a connected-domain search algorithm.
As an embodiment, the key parts include the nose, the eyes and the mouth;
and the feature value extraction module includes:
a nose feature extraction unit, configured to extract nose features with a gray-level co-occurrence matrix;
an eye feature extraction unit, configured to extract eye features with the LBP algorithm;
and a mouth feature extraction unit, configured to extract mouth features with a wavelet transform.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, performs the following steps:
converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image;
extracting features from the key parts;
normalizing the feature values of the key parts, and fusing the normalized feature values to obtain a fused feature value;
inputting the fused feature value into a classifier for classification to obtain a classification result;
and, if the classification result is the copied type, confirming that the photo is a copied photo.
When the computer program in the computer device is executed by the processor, the face is first subdivided into key regions and the features of each key region are then extracted, so the accuracy of the face photo copying detection is higher.
As an embodiment, the converting of the photo into a binarized image and the obtaining of each key part in the photo with a connected-domain algorithm on the binarized image, performed by the processor, include:
judging whether the image to be detected includes face information;
and if yes, converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image.
As an embodiment, the converting of the photo into a binarized image and the obtaining of the key parts in the image to be detected with a connected-domain algorithm on the binarized image, performed by the processor, include:
performing color-space conversion on the input image to obtain a color image;
binarizing the color image to obtain a binarized image;
performing vertical mapping on the binarized image to obtain the face region in the binarized image;
performing grayscale conversion on the face region to obtain a grayscale image of the face region;
convolving the grayscale image, and binarizing the convolved image to obtain a binarized convolution image;
and extracting the key parts from the binarized convolution image with a connected-domain search algorithm.
As an embodiment, the key parts processed by the processor include the nose, the eyes and the mouth;
and the step of extracting features from the key parts includes:
extracting nose features with a gray-level co-occurrence matrix;
extracting eye features with the LBP algorithm;
and extracting mouth features with a wavelet transform.
A readable storage medium storing a computer program which, when executed by a processor, performs the following steps:
converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image;
extracting features from the key parts;
normalizing the feature values of the key parts, and fusing the normalized feature values to obtain a fused feature value;
inputting the fused feature value into a classifier for classification to obtain a classification result;
and, if the classification result is the copied type, confirming that the photo is a copied photo.
When the computer program on the readable storage medium is executed by a processor, the face is first subdivided into key regions and the features of each key region are then extracted, so the accuracy of the face photo copying detection is higher.
As an embodiment, the converting of the photo into a binarized image and the obtaining of each key part in the photo with a connected-domain algorithm on the binarized image, implemented when the computer program stored on the storage medium is executed by a processor, include:
judging whether the image to be detected includes face information;
and if yes, converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image.
As an embodiment, the converting of the photo into a binarized image and the obtaining of the key parts in the image to be detected with a connected-domain algorithm on the binarized image, implemented when the computer program stored on the storage medium is executed by a processor, include:
performing color-space conversion on the input image to obtain a color image;
binarizing the color image to obtain a binarized image;
performing vertical mapping on the binarized image to obtain the face region in the binarized image;
performing grayscale conversion on the face region to obtain a grayscale image of the face region;
convolving the grayscale image, and binarizing the convolved image to obtain a binarized convolution image;
and extracting the key parts from the binarized convolution image with a connected-domain search algorithm.
As an embodiment, when the computer program stored on the storage medium is executed by the processor, the key parts include the nose, the eyes and the mouth;
and the step of extracting features from the key parts includes:
extracting nose features with a gray-level co-occurrence matrix;
extracting eye features with the LBP algorithm;
and extracting mouth features with a wavelet transform.
Drawings
FIG. 1 is a flowchart of a copying detection method according to one embodiment;
FIG. 2 is a partial flowchart of a copying detection method according to one embodiment;
FIG. 3 is a partial flowchart of a copying detection method according to one embodiment;
FIG. 4 is a diagram illustrating the LBP value calculation process according to one embodiment;
FIG. 5 is a schematic structural diagram of a photo copying detection apparatus according to one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a copying detection method according to one embodiment.
Step S102, converting the photo into a binarized image, and obtaining each key part in the photo with a connected-domain algorithm on the binarized image.
Specifically, key parts are first located in the image to be detected; a key part may include at least one of the nose, the eyes and the mouth.
Specifically, the key parts of the face can be detected from information such as color: the photo is first converted into a binarized image, and the key parts in the photo are then obtained with a connected-domain algorithm on the binarized image.
Step S104, extracting the features of each key part.
Specifically, because each key part has different characteristics, extracting feature values separately for the different parts improves the accuracy of face photo copying detection.
Specifically, the feature values of the key parts may be extracted with geometric methods, model methods, signal-processing methods, and so on. A gray-level co-occurrence matrix, the LBP algorithm, or a wavelet transform may be chosen according to the key part: for example, nose features may be extracted with a gray-level co-occurrence matrix, eye features with the LBP algorithm, and mouth features with a wavelet transform.
Step S106, normalizing the feature values of the key parts, and fusing the normalized feature values to obtain a fused feature value.
Specifically, the extracted features are fused. The feature values extracted in S104 may first be normalized, where normalization means rescaling the feature values of the other parts to the standard of any one key part. The normalized features may then be concatenated directly to obtain the fused feature, or given different weights according to specific requirements and combined by weighted addition. It can be understood that the fused feature must represent the features of every key part, but the features of different parts may be given different weights according to specific requirements, reflecting their different importance within the fused feature.
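As an illustrative sketch of step S106 (the function names and weight values below are assumptions for illustration, not part of the patent), the feature values of each key part can be min-max normalized and then concatenated with per-part weights:

```python
# Hypothetical sketch of step S106: min-max normalize each key part's
# feature vector, then concatenate with per-part weights into one
# fused feature vector. Names and weights are illustrative.

def minmax_normalize(vec):
    lo, hi = min(vec), max(vec)
    if hi == lo:                       # constant vector: map to zeros
        return [0.0 for _ in vec]
    return [(x - lo) / (hi - lo) for x in vec]

def fuse_features(parts, weights):
    """parts/weights: dicts keyed by region name, e.g. 'nose', 'eye', 'mouth'."""
    fused = []
    for name, vec in parts.items():
        w = weights.get(name, 1.0)     # default weight 1.0 if unspecified
        fused.extend(w * x for x in minmax_normalize(vec))
    return fused

fused = fuse_features(
    {"nose": [2.0, 4.0], "eye": [1.0, 3.0, 5.0]},
    {"nose": 0.5, "eye": 1.0},
)
```

Equal weights reduce this to plain concatenation, matching the unweighted addition-style fusion described above.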
Step S108, inputting the fused feature value into a classifier for classification to obtain a classification result.
Specifically, a support vector machine (SVM) or another classifier may be used for detection. A training model is first generated by training on a number of training images. The SVM or other classifier then classifies the input fused feature with this training model to obtain a classification result, and whether the photo is a copy is determined from the classification result.
Step S110, if the classification result is the copied type, confirming that the photo is a copied photo.
Specifically, the classification result may be either the copied type or the non-copied type: if the result is the copied type, the photo is a copied photo; correspondingly, if the result is the non-copied type, the photo is judged not to be a copied photo.
In the above face photo copying detection method, the face is first subdivided into key regions and the features of each key region are then extracted, so the detection accuracy is higher.
In one embodiment, the step of obtaining the key parts in the image to be detected includes: judging whether the image to be detected includes face information; if yes, performing the step of obtaining the key parts in the image to be detected; if not, ending the detection.
Specifically, a face detection algorithm is used to detect faces in the image, and photos without a face are excluded directly. Depending on specific requirements, the face detection algorithm may be based on feature points, a neural network, and so on.
In the above face photo copying detection method, a face recognition step is added before the key parts are obtained in the image to be detected, so photos without a face are excluded directly, which further improves the efficiency of copying detection.
In the above copying detection method, the key parts include the nose, the eyes and the mouth.
Specifically, to achieve higher detection accuracy, all three key parts, namely the nose, the eyes and the mouth, may be used.
In the above face photo copying detection method, the three key regions of the nose, the eyes and the mouth are used together, and the features of each key region are then extracted, so the detection accuracy is higher.
Referring to fig. 2 together, fig. 2 is a partial flowchart of a copying detection method according to one embodiment.
In step S102, the conversion of the photo into a binarized image and the acquisition of the key parts in the image to be detected with a connected-domain algorithm may be completed by the following steps:
Step S202, performing color-space conversion on the input image to obtain a color image.
Specifically, the input image is converted to another color space to obtain a color image; for example, it may be converted to the YCbCr domain.
Further, the RGB values of the input image may be converted according to the following formulas:
Y = 0.2990R + 0.5870G + 0.1140B
Cb = -0.1687R - 0.3313G + 0.5000B + 128
Cr = 0.5000R - 0.4187G - 0.0813B + 128.
Step S204, binarizing the color image to obtain a binarized image.
Specifically, the color image obtained in step S202 is binarized. For example, the Cb and Cr values in the YCbCr image are thresholded: if 93 < Cb < 133 and 123 < Cr < 175, the pixel belongs to the skin-color range. Pixels in the skin-color range are assigned the value 255, and all others 0, which yields the binarized image I(x, y).
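Steps S202 and S204 can be sketched per pixel as follows (a minimal version; the function names are assumptions, while the conversion coefficients and skin thresholds are taken from the formulas above):

```python
# Sketch of steps S202/S204 on a single RGB pixel: convert to YCbCr
# with the formulas above, then binarize by the skin-colour ranges
# 93 < Cb < 133 and 123 < Cr < 175.

def rgb_to_ycbcr(r, g, b):
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128
    return y, cb, cr

def skin_binarize(pixel):
    """Return 255 if the RGB pixel falls in the skin-colour range, else 0."""
    _, cb, cr = rgb_to_ycbcr(*pixel)
    return 255 if (93 < cb < 133 and 123 < cr < 175) else 0
```

Applying `skin_binarize` to every pixel of the color image yields the binarized image I(x, y).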
Step S206, performing vertical mapping and curve smoothing on the binarized image to obtain the face region in the binarized image.
Specifically, the face region can be determined by first vertically mapping the binarized image and then smoothing the resulting curve.
Step S208, performing grayscale conversion on the face region to obtain a grayscale image of the face region.
Specifically, the RGB values of the face region can be obtained from the face region and converted by the following formula to obtain a grayscale image:
I = 0.299R + 0.587G + 0.114B.
Step S210, convolving the grayscale image, and binarizing the convolved image to obtain a binarized convolution image.
Specifically, the grayscale image is convolved with a convolution template, and the convolved image is then binarized.
Step S212, extracting the key parts from the binarized convolution image with a connected-domain search algorithm.
Specifically, the three key parts, namely the eyes, the nose and the mouth, can be obtained with a scan-line-based connected-domain algorithm.
In the above face photo copying detection method, the three key regions of the eyes, the nose and the mouth are obtained by the above steps with high efficiency.
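Step S212 can be sketched as follows. This is a plain scan-plus-flood-fill labelling of the foreground regions of the binarized convolution image, not necessarily the patent's exact scan-line algorithm; the function name and the bounding-box output format are assumptions:

```python
# Hypothetical sketch of step S212: label 4-connected foreground regions
# (value 255) in a binarized image and return each region's bounding box
# as (min_x, min_y, max_x, max_y), in scan order.

from collections import deque

def connected_regions(img):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] == 255 and not seen[y][x]:
                q = deque([(y, x)])
                seen[y][x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 255 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            ys.append(ny)
                            xs.append(nx)
                            q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

In this sketch, the eye, nose and mouth candidates would be selected from the returned boxes by their position and size within the face region.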
Referring to fig. 3, fig. 3 is a partial flowchart of a copying detection method according to one embodiment, in which the step of extracting the features of the key parts includes:
Step S302, extracting nose features with a gray-level co-occurrence matrix.
Specifically, for the nose region, the gray-level co-occurrence matrix is first computed, and at least one of contrast, energy and entropy is then derived from it as the nose feature.
Specifically, take any point A with coordinates (x, y) in the nose image and a point B offset from it with coordinates (x + a, y + b), where a and b are integers configured in advance according to specific requirements. A and B form a point pair whose gray values are (f1, f2). Different points in the nose image yield different point-pair gray values; if the nose image has L gray levels, there are L² different (f1, f2) combinations in total. For the whole image, the number of occurrences of each (f1, f2) value is counted and arranged into a square matrix, which is then normalized by the total number of pairs into the probability P of (f1, f2) occurring; the resulting matrix is the gray-level co-occurrence matrix.
Specifically, contrast measures the value distribution of the matrix and the amount of local change in the image. It reflects the clarity of the image and the depth of the texture grooves: the deeper the grooves, the greater the contrast and the clearer the image; the shallower the grooves, the smaller the contrast and the blurrier the image. Energy is the sum of squares of all elements of the gray-level co-occurrence matrix and measures how stably the gray level of the texture varies; it reflects the uniformity of the gray-level distribution and the coarseness of the texture, and a large energy value indicates a stably varying texture. Entropy measures the randomness of the image's information content: it is large when all values of the co-occurrence matrix are equal or the pixel values are highly random, so the entropy expresses the complexity of the gray-level distribution, and the larger the entropy, the more complex the image.
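The construction above can be sketched in pure Python as follows (the function names, the default offset, and the use of log base 2 for entropy are assumptions for illustration):

```python
# Illustrative sketch of step S302: build a normalized gray-level
# co-occurrence matrix for offset (a, b), then derive contrast, energy
# and entropy from it.

import math

def glcm(img, levels, a=1, b=0):
    """P[f1][f2] = probability of pair (f1, f2) at offset (dx=a, dy=b)."""
    h, w = len(img), len(img[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + b, x + a
            if 0 <= ny < h and 0 <= nx < w:
                counts[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_stats(p):
    n = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    return contrast, energy, entropy
```

Any of the three statistics (or all of them) can then be kept as the nose feature vector.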
Step S304, extracting eye features with the LBP algorithm.
Specifically, given the characteristics of the eye region, the LBP algorithm gives better results for eye features.
Specifically, a region of some scale is selected, for example a 3 × 3 square. The gray values of the 8 neighbouring pixels are compared against the central pixel as a threshold: if a neighbouring pixel's value is greater than the central pixel's, its position is marked 1, otherwise 0. This yields the LBP value of the central pixel, which reflects the texture of the region. Referring to fig. 4, fig. 4 is a diagram illustrating the LBP value calculation process according to one embodiment. Reference numeral 401 shows the gray values of the 9 pixels of a 3 × 3 square region, where the central pixel's gray value is 5. Reference numeral 403 is the binarized region; reading the binary values clockwise from the top-left corner gives the binary code 405, and reference numeral 407 is the LBP value of the central pixel obtained by converting the binary code to decimal: 19.
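The 3 × 3 LBP computation can be sketched as follows (the clockwise-from-top-left bit ordering follows the description of fig. 4 above; the function name is an assumption):

```python
# Sketch of the 3x3 LBP described above: threshold the 8 neighbours
# against the centre pixel, read the bits clockwise from the top-left
# corner, and convert the binary code to decimal.

def lbp_3x3(block):
    """block: 3x3 list of gray values; returns the centre pixel's LBP code."""
    c = block[1][1]
    # clockwise from the top-left neighbour
    neighbours = [block[0][0], block[0][1], block[0][2],
                  block[1][2], block[2][2], block[2][1],
                  block[2][0], block[1][0]]
    bits = ''.join('1' if n > c else '0' for n in neighbours)
    return int(bits, 2)
```

Sliding this over the eye region and histogramming the resulting codes would give the eye feature vector.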
Step S306, extracting mouth features with a wavelet transform.
Specifically, given the characteristics of the mouth features, a wavelet-transform algorithm gives better results.
Specifically, a wavelet function is selected and a transform scale is set as required; for example, the Haar wavelet with a transform scale of 3 may be chosen. The image is then transformed along its rows and along its columns, and the transformed values are finally quantized to obtain the feature vector.
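A single level of the row-and-column Haar transform in step S306 can be sketched as follows (a real implementation would iterate this to the configured scale, e.g. 3, and then quantize the coefficients; the function names are assumptions):

```python
# Minimal single-level Haar transform sketch: pairwise averages and
# differences along each row, then along each column.

def haar_1d(v):
    half = len(v) // 2
    avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
    diff = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)]
    return avg + diff

def haar_2d_level(img):
    rows = [haar_1d(r) for r in img]           # row transform
    cols = [haar_1d(list(c)) for c in zip(*rows)]  # column transform
    return [list(r) for r in zip(*cols)]       # transpose back
```

The low-frequency (average) block carries the coarse shape of the mouth, while the difference blocks carry the texture detail that the quantized feature vector summarizes.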
It can be understood that steps S302, S304 and S306 have no required order and may be combined in any order according to the requirements.
The above face photo copying detection method obtains the features of the three key regions, the eyes, the nose and the mouth, with different methods, which further improves the detection accuracy.
A device for detecting the reproduction of a photograph, wherein the device comprises:
a key part obtaining module 501, configured to convert the photo into a binary image, and obtain each key part in the photo based on a connected domain algorithm of the binary image;
a feature value extraction module 503, configured to extract the feature values in the key parts;
a feature value fusion module 505, configured to normalize the feature values in the key parts, and fuse the normalized feature values to obtain a fused feature value;
a classification result obtaining module 507, configured to input the fusion feature value into a classifier for classification to obtain a classification result;
a reproduction judging module 509, configured to determine that the photo is a reproduction if the classification result is a reproduction type.
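The normalization and fusion performed by module 505 can be sketched as follows. Min-max normalization and simple concatenation are assumptions made here for illustration; the patent does not specify the exact normalization formula or fusion rule.

```python
def normalize(vec):
    """Min-max normalise a feature vector to [0, 1]; a constant vector
    maps to all zeros."""
    lo, hi = min(vec), max(vec)
    if hi == lo:
        return [0.0] * len(vec)
    return [(v - lo) / (hi - lo) for v in vec]

def fuse(feature_sets):
    """Normalise each key part's feature vector and concatenate them into
    a single fused feature vector for the classifier."""
    fused = []
    for vec in feature_sets:
        fused.extend(normalize(vec))
    return fused

# e.g. eye features [1, 3] and mouth features [0, 10, 5]
print(fuse([[1, 3], [0, 10, 5]]))  # prints [0.0, 1.0, 0.0, 1.0, 0.5]
```

The fused vector would then be fed to the classifier of module 507 (the description elsewhere mentions a support vector machine as one option).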
With the above device for detecting the reproduction of face photos, the face is first subdivided into key regions and the features of each key region are then extracted, so that reproduction detection of face photos achieves higher precision.
As an embodiment, wherein the key part obtaining module includes:
the face detection unit is used for judging whether the image to be detected comprises face information or not;
if yes, the key part obtaining module continues to convert the photo into a binary image, and obtains each key part in the photo based on a connected domain algorithm of the binary image.
As an embodiment, wherein the key part obtaining module includes:
the color image acquisition unit is used for carrying out color space conversion on the input image to obtain a color image;
a binarization image obtaining unit, configured to binarize the color image to obtain a binarization image;
a face region obtaining unit, configured to perform vertical mapping on the binarized image to obtain a face region in the binarized image;
the gray level image acquisition unit is used for carrying out gray level conversion on the face area to obtain a gray level image of the face area;
a binarization convolution image unit used for carrying out image convolution on the gray level image and carrying out binarization on the convolved image to obtain a binarization convolution image;
and the key part extracting unit is used for extracting the key part from the binary convolution image through a connected domain searching algorithm.
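The vertical mapping (projection) step used by the face region obtaining unit can be sketched as follows. This is an illustrative sketch only: the column-sum projection and the `min_count` threshold are assumptions, not the patent's exact rule.

```python
def vertical_projection(binary):
    """Column-wise sum of foreground pixels in a binary (0/1) image."""
    h, w = len(binary), len(binary[0])
    return [sum(binary[r][c] for r in range(h)) for c in range(w)]

def face_column_range(binary, min_count=1):
    """Locate the horizontal extent of the face as the run of columns whose
    projection reaches at least `min_count` foreground pixels."""
    proj = vertical_projection(binary)
    cols = [c for c, v in enumerate(proj) if v >= min_count]
    if not cols:
        return None  # no foreground: no face region found
    return cols[0], cols[-1]
```

After the face region is cropped this way, the gray conversion, convolution, binarization, and connected domain search described above would isolate the individual key parts inside it.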
As an embodiment, wherein the key parts include the nose, eyes and mouth;
the feature value extraction module includes:
the nose feature extraction unit is used for extracting nose features by adopting a gray level co-occurrence matrix;
the eye feature extraction unit is used for extracting eye features by adopting an LBP algorithm;
and the mouth feature extraction unit is used for extracting mouth features by wavelet transform.
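The gray level co-occurrence matrix used by the nose feature extraction unit can be sketched as follows. The offset, the number of gray levels, and the contrast/energy statistics below are common choices assumed for illustration; the patent does not specify which are used.

```python
def glcm(gray, levels=2, dx=1, dy=0):
    """Gray level co-occurrence matrix for pixel offset (dy, dx),
    normalised to joint probabilities. `gray` holds integer levels
    in the range [0, levels)."""
    h, w = len(gray), len(gray[0])
    mat = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(h - dy):
        for c in range(w - dx):
            mat[gray[r][c]][gray[r + dy][c + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in mat]

def glcm_features(p):
    """Contrast and energy, two common texture statistics of a GLCM."""
    levels = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(p[i][j] ** 2
                 for i in range(levels) for j in range(levels))
    return contrast, energy
```

A recaptured photo tends to show altered texture statistics (e.g. moiré or blur on the nose region), which is why such GLCM-derived values can help separate reproductions from originals.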
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of:
converting the photo into a binary image, and acquiring each key part in the photo based on a connected domain algorithm of the binary image;
extracting features in the key parts;
normalizing the characteristic values in the key parts, and fusing the normalized characteristic values to obtain fused characteristic values;
inputting the fusion characteristic value into a classifier for classification to obtain a classification result;
and if the classification result is a reproduction type, confirming that the photo is a reproduction.
When the computer program on the computer device is executed by the processor, the implemented method first subdivides the face into key regions and then extracts the features of those regions, so that reproduction detection of face photos achieves higher precision.
As an embodiment, the converting the photo into a binary image and obtaining each key part in the photo based on a connected domain algorithm of the binary image, executed by the processor, includes:
judging whether the image to be detected comprises face information or not;
if yes, the photo is converted into a binary image, and each key part in the photo is obtained based on a connected domain algorithm of the binary image.
As an embodiment, the converting the photo into a binary image and acquiring the key part in the image to be detected based on a connected domain algorithm of the binary image executed by the processor includes:
carrying out color space conversion on an input image to obtain a color image;
binarizing the color image to obtain a binarized image;
carrying out vertical mapping on the binary image to obtain a face area in the binary image;
carrying out gray level conversion on the face area to obtain a gray level image of the face area;
performing image convolution on the gray level image, and performing binarization on the convolved image to obtain a binarized convolution image;
and extracting the key part from the binary convolution image through a connected domain search algorithm.
As an embodiment, the key parts processed by the processor include the nose, eyes, and mouth;
the step of extracting features in the key parts comprises:
extracting nasal part characteristics by adopting a gray level co-occurrence matrix;
extracting eye features by adopting an LBP algorithm;
and extracting the mouth features by wavelet transform.
A readable storage medium storing a computer program which when executed by a processor performs the steps of:
converting the photo into a binary image, and acquiring each key part in the photo based on a connected domain algorithm of the binary image;
extracting features in the key parts;
normalizing the characteristic values in the key parts, and fusing the normalized characteristic values to obtain fused characteristic values;
inputting the fusion characteristic value into a classifier for classification to obtain a classification result;
and if the classification result is a reproduction type, confirming that the photo is a reproduction.
When the computer program on the readable storage medium is executed by a processor, the implemented method first subdivides the face into key regions and then extracts the features of those regions, so that reproduction detection of face photos achieves higher precision.
As an embodiment, the converting the photo into a binary image and acquiring each key part in the photo based on a connected domain algorithm of the binary image, which is implemented by a computer program stored in a storage medium when the computer program is executed by a processor, includes:
judging whether the image to be detected comprises face information or not;
if yes, the photo is converted into a binary image, and each key part in the photo is obtained based on a connected domain algorithm of the binary image.
As an embodiment, the converting the photo into a binary image, and acquiring the key part in the image to be detected based on a connected domain algorithm of the binary image, which is implemented by a computer program stored in a storage medium when the computer program is executed by a processor, includes:
carrying out color space conversion on an input image to obtain a color image;
binarizing the color image to obtain a binarized image;
carrying out vertical mapping on the binary image to obtain a face area in the binary image;
carrying out gray level conversion on the face area to obtain a gray level image of the face area;
performing image convolution on the gray level image, and performing binarization on the convolved image to obtain a binarized convolution image;
and extracting the key part from the binary convolution image through a connected domain search algorithm.
As an embodiment, in the method implemented when the computer program stored on the storage medium is executed by the processor, the key parts include the nose, eyes, and mouth;
the step of extracting features in the key parts comprises:
extracting nasal part characteristics by adopting a gray level co-occurrence matrix;
extracting eye features by adopting an LBP algorithm;
and extracting the mouth features by wavelet transform.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method for detecting reproduction of a photograph, the method comprising:
detecting the face in the image according to a preset face detection algorithm to obtain a face image photo;
converting the photo into a binary image, and acquiring each key part in the photo based on a connected domain algorithm of the binary image;
extracting characteristic values in each key part;
normalizing the characteristic values in the key parts, and fusing the normalized characteristic values to obtain fused characteristic values;
inputting the fusion characteristic value into a classifier for classification to obtain a classification result;
and if the classification result is a reproduction type, confirming that the photo is a reproduction.
2. The reproduction detection method according to claim 1, wherein the converting the photo into a binary image and obtaining each key part in the photo based on a connected domain algorithm of the binary image comprises:
judging whether the face image photo to be detected comprises face information or not;
if yes, the photo is converted into a binary image, and each key part in the photo is obtained based on a connected domain algorithm of the binary image.
3. The reproduction detection method according to claim 1 or 2, wherein converting the photo into a binary image, and acquiring the key part in the image to be detected based on a connected domain algorithm of the binary image comprises:
carrying out color space conversion on an input image to obtain a color image;
binarizing the color image to obtain a binarized image;
carrying out vertical mapping on the binary image to obtain a face area in the binary image;
carrying out gray level conversion on the face area to obtain a gray level image of the face area;
performing image convolution on the gray level image, and performing binarization on the convolved image to obtain a binarized convolution image;
and extracting the key part from the binary convolution image through a connected domain search algorithm.
4. The method for detecting a reproduction according to claim 1, wherein the key parts include a nose, eyes and a mouth;
the step of extracting features in the key parts comprises:
extracting nasal part characteristics by adopting a gray level co-occurrence matrix;
extracting eye features by adopting an LBP algorithm;
and extracting the mouth features by wavelet transform.
5. A device for detecting the reproduction of a photograph, said device comprising:
the face detection module is used for detecting the face in the image according to a preset face detection algorithm to obtain a face image photo;
the key part acquisition module is used for converting the photo into a binary image and acquiring each key part in the photo based on a connected domain algorithm of the binary image;
the characteristic value extraction module is used for extracting the characteristic values in the key parts;
the characteristic value fusion module is used for normalizing the characteristic values in the key parts and fusing the normalized characteristic values to obtain fusion characteristic values;
a classification result obtaining module, configured to input the fusion feature value into a classifier for classification to obtain a classification result;
and the reproduction judging module is used for confirming that the photo is a reproduction if the classification result is a reproduction type.
6. The apparatus according to claim 5, wherein the key portion acquiring module comprises:
the face detection unit is used for judging whether the face image photo to be detected comprises face information or not;
if yes, the key part obtaining module continues to convert the photo into a binary image, and obtains each key part in the photo based on a connected domain algorithm of the binary image.
7. The apparatus according to claim 5 or 6, wherein the key portion acquiring module comprises:
the color image acquisition unit is used for carrying out color space conversion on the input image to obtain a color image;
a binarization image obtaining unit, configured to binarize the color image to obtain a binarization image;
a face region obtaining unit, configured to perform vertical mapping on the binarized image to obtain a face region in the binarized image;
the gray level image acquisition unit is used for carrying out gray level conversion on the face area to obtain a gray level image of the face area;
a binarization convolution image unit used for carrying out image convolution on the gray level image and carrying out binarization on the convolved image to obtain a binarization convolution image;
and the key part extracting unit is used for extracting the key part from the binary convolution image through a connected domain searching algorithm.
8. The apparatus according to claim 5, wherein the key parts include a nose, eyes and mouth;
the feature value extraction module includes:
the nose feature extraction unit is used for extracting nose features by adopting a gray level co-occurrence matrix;
the eye feature extraction unit is used for extracting eye features by adopting an LBP algorithm;
and the mouth feature extraction unit is used for extracting mouth features by wavelet transform.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-4 are implemented when the program is executed by the processor.
10. A readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810196019.6A CN108549836B (en) | 2018-03-09 | 2018-03-09 | Photo copying detection method, device, equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810196019.6A CN108549836B (en) | 2018-03-09 | 2018-03-09 | Photo copying detection method, device, equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108549836A CN108549836A (en) | 2018-09-18 |
CN108549836B true CN108549836B (en) | 2021-04-06 |
Family
ID=63516018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810196019.6A Active CN108549836B (en) | 2018-03-09 | 2018-03-09 | Photo copying detection method, device, equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108549836B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109815960A (en) * | 2018-12-21 | 2019-05-28 | 深圳壹账通智能科技有限公司 | Recognition method, device, equipment and medium for remade image based on deep learning |
CN109754059A (en) * | 2018-12-21 | 2019-05-14 | 平安科技(深圳)有限公司 | Reproduction image-recognizing method, device, computer equipment and storage medium |
CN109859227B (en) * | 2019-01-17 | 2023-07-14 | 平安科技(深圳)有限公司 | Method and device for detecting flip image, computer equipment and storage medium |
CN109886309A (en) * | 2019-01-25 | 2019-06-14 | 成都浩天联讯信息技术有限公司 | A method of digital picture identity is forged in identification |
CN111008651B (en) * | 2019-11-13 | 2023-04-28 | 科大国创软件股份有限公司 | Image reproduction detection method based on multi-feature fusion |
CN111191568B (en) * | 2019-12-26 | 2024-06-14 | 中国平安人寿保险股份有限公司 | Method, device, equipment and medium for identifying flip image |
CN111259915B (en) * | 2020-01-20 | 2024-06-14 | 中国平安人寿保险股份有限公司 | Method, device, equipment and medium for identifying flip image |
CN111428740A (en) * | 2020-02-28 | 2020-07-17 | 深圳壹账通智能科技有限公司 | Detection method and device for network-shot photo, computer equipment and storage medium |
CN112580621B (en) * | 2020-12-24 | 2022-04-29 | 成都新希望金融信息有限公司 | Identity card copying and identifying method and device, electronic equipment and storage medium |
CN112950559B (en) * | 2021-02-19 | 2022-07-05 | 山东矩阵软件工程股份有限公司 | Method and device for detecting copied image, electronic equipment and storage medium |
CN113870454A (en) * | 2021-09-29 | 2021-12-31 | 平安银行股份有限公司 | Attendance checking method and device based on face recognition, electronic equipment and storage medium |
CN114819142B (en) * | 2022-04-18 | 2024-09-06 | 支付宝(杭州)信息技术有限公司 | Screen shooting image recognition and training method and device for models thereof and electronic equipment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4893863B1 (en) * | 2011-03-11 | 2012-03-07 | オムロン株式会社 | Image processing apparatus and image processing method |
CN103116763B (en) * | 2013-01-30 | 2016-01-20 | 宁波大学 | A kind of living body faces detection method based on hsv color Spatial Statistical Character |
CN105354856A (en) * | 2015-12-04 | 2016-02-24 | 北京联合大学 | Human matching and positioning method and system based on MSER and ORB |
CN106650669A (en) * | 2016-12-27 | 2017-05-10 | 重庆邮电大学 | Face recognition method for identifying counterfeit photo deception |
CN106778704A (en) * | 2017-01-23 | 2017-05-31 | 安徽理工大学 | A kind of recognition of face matching process and semi-automatic face matching system |
CN106971161A (en) * | 2017-03-27 | 2017-07-21 | 深圳大图科创技术开发有限公司 | Face In vivo detection system based on color and singular value features |
- 2018-03-09: CN CN201810196019.6A patent/CN108549836B/en (Active)
Also Published As
Publication number | Publication date |
---|---|
CN108549836A (en) | 2018-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108549836B (en) | Photo copying detection method, device, equipment and readable storage medium | |
CN111488756B (en) | Face recognition-based living body detection method, electronic device, and storage medium | |
JP6778247B2 (en) | Image and feature quality for eye blood vessels and face recognition, image enhancement and feature extraction, and fusion of eye blood vessels with facial and / or subface regions for biometric systems | |
CN112381775B (en) | Image tampering detection method, terminal device and storage medium | |
JP5844783B2 (en) | Method for processing grayscale document image including text region, method for binarizing at least text region of grayscale document image, method and program for extracting table for forming grid in grayscale document image | |
CN106951869B (en) | A kind of living body verification method and equipment | |
Türkyılmaz et al. | License plate recognition system using artificial neural networks | |
AU2011250827B2 (en) | Image processing apparatus, image processing method, and program | |
CN105243376A (en) | Living body detection method and device | |
CN111259680B (en) | Two-dimensional code image binarization processing method and device | |
CN113642639B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN107256543B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111062426A (en) | Method, device, electronic equipment and medium for establishing training set | |
Purnapatra et al. | Presentation attack detection with advanced cnn models for noncontact-based fingerprint systems | |
JP2013140428A (en) | Edge detection device, edge detection program, and edge detection method | |
JP2005190474A5 (en) | ||
Nguyen et al. | Face presentation attack detection based on a statistical model of image noise | |
JP6185807B2 (en) | Wrinkle state analysis method and wrinkle state analyzer | |
Simon et al. | DeepLumina: A method based on deep features and luminance information for color texture classification | |
KR101672814B1 (en) | Method for recognizing gender using random forest | |
CN113221842A (en) | Model training method, image recognition method, device, equipment and medium | |
Berbar | Skin colour correction and faces detection techniques based on HSL and R colour components | |
JP7415202B2 (en) | Judgment method, judgment program, and information processing device | |
CN112183156B (en) | Living body detection method and equipment | |
Amelia | Age Estimation on Human Face Image Using Support Vector Regression and Texture-Based Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information |
Inventor after: Luo Jing; Chen Shujun; Liu Yang; Li Hongyan. Inventor before: Chen Shujun; Liu Yang; Li Hongyan |