
CN101131728A - A Face Shape Matching Method Based on Shape Context - Google Patents


Info

Publication number
CN101131728A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CNA2007100466764A
Other languages
Chinese (zh)
Inventor
夏小玲
乐嘉锦
王绍宇
柴望
甘泉
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CNA2007100466764A priority Critical patent/CN101131728A/en
Publication of CN101131728A publication Critical patent/CN101131728A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract


The invention relates to a face shape matching method based on Shape Context. First, image-pyramid and diffusion-filtering techniques are used for preprocessing; the Canny edge detection algorithm and a contour extraction algorithm then extract boundary contour information; a log-polar transformation is applied to the extracted boundary information to obtain a log-polar histogram; all matching points are obtained by computing the Cs value; finally, a similarity value is computed to judge whether the shapes match. The method is simple, accurate, economical and readily extensible, so face shape matching can be widely applied to access control, security verification, security surveillance, criminal search and related fields.

Description

Face Shape matching method based on Shape Context
Technical Field
The invention relates to face recognition technology, in particular to a face shape matching method based on Shape Context.
Background
Intelligent video monitoring builds on digital, networked video surveillance but goes beyond it as a higher-end application: based on computer vision, it analyzes the video content of a monitored scene, extracts key information, and generates corresponding events and alarms, making it a new generation of surveillance driven by video content analysis. In an intelligent video monitoring system, modeling of human motion images is the first stage: through this modeling, people are extracted from the surveillance video to obtain information about each characteristic part of the human body. Intelligent video surveillance offers many functions, such as object motion detection, PTZ tracking, and human behavior analysis. Among these, face recognition is particularly important: it automatically detects and recognizes a person's facial features and identifies or verifies the person's identity by comparison against a database, and it has become an indispensable part of everyday life.
At present, automatic face recognition is a major research hotspot in pattern recognition, image processing and related disciplines. An automatic face recognition system comprises two main technical stages: face detection and localization, followed by feature extraction and recognition (matching).
Most current research on face shape matching targets two-dimensional frontal face images, and several frontal-face recognition methods exist, such as template matching and hidden Markov models. However, facial expressions are rich; faces change with age; hairstyle, beard, glasses and other adornments alter the face; and the captured image is affected by illumination, imaging angle, and imaging distance. These factors make face shape matching difficult to implement, and the results have not been ideal. (Reference [1]: Li-Na Liu, Yi-Zheng Qiao, "Some Aspects of Human Face Recognition Technologies".)
Most proposed face shape matching algorithms represent the target with a single numerical value or feature vector and therefore have a key limitation: they lack 2D invariance to scale, rotation, and translation. In the face shape matching of traditional intelligent video surveillance, the matching quality degrades after such 2D deformations.
Disclosure of Invention
The technical problem addressed by the invention is to provide a method for extracting and identifying face shape features within an automatic face recognition system, while overcoming the lack of 2D invariance to scale, rotation, and translation.
The technical scheme adopted by the invention to solve this problem is a face shape matching method based on Shape Context, comprising the following steps:
(1) Preprocessing by adopting an image pyramid and diffusion filtering technology;
(2) Extracting boundary contour information by using a Canny edge detection algorithm and a contour extraction algorithm;
(3) Carrying out logarithmic polar coordinate transformation on the extracted boundary information to obtain a logarithmic polar coordinate histogram;
(4) Obtaining all matching points by calculating the Cs value;
(5) Calculating a similarity value and performing the shape matching judgment.
The image pyramid uses a high-to-low, coarse-to-fine search method, which locates the matching position more accurately, effectively improves tracking speed, and meets real-time requirements.
The pyramid structure suits multi-resolution images: images at different resolutions are stored in different layers of the pyramid. The original image is stored at the bottom; as the layer rises, the resolution decreases and the storage needed for the corresponding image decreases accordingly (an image reduced to 1/N of the original resolution needs only 1/N of the original storage). During matching, the target is first sought at the highest (lowest-resolution) layer of the pyramid with a global search strategy, giving the target position at that layer. A high-to-low, coarse-to-fine search then yields a more accurate position. This approach effectively improves tracking speed and better meets real-time requirements.
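As an illustration of the coarse-to-fine search, here is a minimal NumPy sketch (the names `downsample` and `pyramid_match`, the sum-of-squared-differences cost, and the 3×3 refinement window are illustrative assumptions; the patent does not disclose its exact implementation):

```python
import numpy as np

def downsample(img):
    """Halve the resolution by averaging 2x2 blocks (one pyramid step)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ssd(patch, tmpl):
    """Sum-of-squared-differences matching cost."""
    return float(((patch - tmpl) ** 2).sum())

def pyramid_match(img, tmpl, levels=2):
    """Global search at the coarsest layer, then refine the position
    in a small window while descending to full resolution."""
    imgs, tmpls = [img], [tmpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tmpls.append(downsample(tmpls[-1]))
    th, tw = tmpls[-1].shape
    top = imgs[-1]
    # exhaustive (global) search at the top, lowest-resolution layer
    _, y, x = min((ssd(top[r:r + th, c:c + tw], tmpls[-1]), r, c)
                  for r in range(top.shape[0] - th + 1)
                  for c in range(top.shape[1] - tw + 1))
    # coarse-to-fine refinement: double the estimate, search a 3x3 window
    for lvl in range(levels - 1, -1, -1):
        y, x = 2 * y, 2 * x
        th, tw = tmpls[lvl].shape
        cand = [(ssd(imgs[lvl][r:r + th, c:c + tw], tmpls[lvl]), r, c)
                for r in (y - 1, y, y + 1) for c in (x - 1, x, x + 1)
                if 0 <= r <= imgs[lvl].shape[0] - th
                and 0 <= c <= imgs[lvl].shape[1] - tw]
        _, y, x = min(cand)
    return y, x
```

Because only a 3×3 window is examined per layer after the coarse global search, the work at full resolution stays small, which is where the speed-up for real-time tracking comes from.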
The diffusion filtering technique enhances boundaries while blurring fine detail.
During the smoothing performed by diffusion filtering, the diffusion function automatically adjusts the diffusion coefficient according to the image content: smoothing is strengthened in flat regions of the image and weakened in feature-edge regions, exhibiting anisotropic diffusion behavior. Diffusion filtering thus enhances boundaries while blurring fine detail, giving the image a smoother, higher-quality result.
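A minimal sketch of such edge-preserving smoothing, assuming the classical Perona-Malik scheme with an exponential conductance (the function name, `kappa`, and `step` values are illustrative; the patent does not specify its exact diffusion function):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik style diffusion: the conductance g(d) = exp(-(d/kappa)^2)
    stays near 1 for small differences (flat regions keep smoothing) and
    drops toward 0 across strong edges (edges are preserved)."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')        # replicate the border
        # differences toward the four neighbours
        dN = p[:-2, 1:-1] - u
        dS = p[2:, 1:-1] - u
        dW = p[1:-1, :-2] - u
        dE = p[1:-1, 2:] - u
        # content-dependent diffusion coefficient per direction
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dN, dS, dW, dE))
        u = u + step * flux                  # step <= 0.25 keeps the explicit scheme stable
    return u
```

On a noisy step edge, this reduces the noise inside flat regions while leaving the jump across the edge essentially intact, which is the anisotropic behavior the text describes.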
The step (2) is realized by the following steps:
(a) Extracting the edge information to obtain a binary image;
(b) And (3) searching the contour in the binary image by using a contour extraction algorithm to obtain the contour boundary of the basic information of the human face.
The contour extraction algorithm's approximation mode compresses horizontal, vertical and diagonal segments: a curve representing edge information keeps only the pixel points at its ends, so points are sampled non-uniformly. The resulting contour extracts as few contour feature points as possible while retaining the important ones (if a boundary is a straight line, the sampling interval of the contour feature points is large; if it is a curve, the interval shrinks as the curvature grows).
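The end-point compression of straight runs can be sketched as follows (a hypothetical `compress_chain` over a connected chain of (x, y) points; it mirrors the behavior of OpenCV's CHAIN_APPROX_SIMPLE-style contour approximation but is not the patent's code):

```python
def compress_chain(points):
    """Keep only the end points of horizontal, vertical and diagonal runs:
    an interior point whose incoming and outgoing step directions agree is
    dropped, so straight stretches keep few points while corners (and hence
    curved stretches) keep many -- the non-uniform sampling described above."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        d_in = (cur[0] - prev[0], cur[1] - prev[1])
        d_out = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d_in != d_out:            # direction change: keep the corner point
            kept.append(cur)
    kept.append(points[-1])
    return kept
```

For example, an L-shaped chain of six pixels is reduced to its two end points and the corner where the direction changes.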
The step (3) comprises the following steps:
(a) Carrying out logarithmic polar coordinate transformation by taking the selected point as a coordinate origin;
(b) Calculating the number of points falling in each grid;
(c) And carrying out normalization processing by using an empirical density method to obtain a histogram.
The empirical density normalization method is as follows: hx(1..n) and hy(1..n) are two sets, normalized respectively by the empirical density functions Hx and Hy.
[Normalization formula given in the original filing as image A20071004667600061; not reproduced here.]
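For illustration, a log-polar Shape Context histogram with empirical-density normalization (bin counts divided by the total) might look like the sketch below; the bin layout (5 radial × 12 angular bins) and the radius limits follow the common Shape Context convention and are assumptions, since the patent's formula is given only as an image:

```python
import math

def shape_context(points, origin, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of all other points seen from `origin`:
    n_r log-spaced radial bins between r_min and r_max, crossed with
    n_theta uniform angular bins, normalised so the bins sum to 1
    (an empirical density)."""
    hist = [[0] * n_theta for _ in range(n_r)]
    total = 0
    for (x, y) in points:
        if (x, y) == origin:
            continue
        dx, dy = x - origin[0], y - origin[1]
        r = math.hypot(dx, dy)
        if r < r_min or r >= r_max:
            continue                 # outside the log-polar diagram
        k_r = int(n_r * math.log(r / r_min) / math.log(r_max / r_min))
        theta = math.atan2(dy, dx) % (2 * math.pi)
        k_t = int(n_theta * theta / (2 * math.pi))
        hist[min(k_r, n_r - 1)][min(k_t, n_theta - 1)] += 1
        total += 1
    if total:
        hist = [[c / total for c in row] for row in hist]
    return hist
```

The log-spaced radial bins make the descriptor more sensitive to nearby contour points than to distant ones, which is the point of the log-polar grid.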
The Cs value in step (4) is the Cost value of Shape Context. Cs is the chi-square (χ²) statistic of the two histograms, where g(k) and h(k) are the values of the corresponding histogram bins:

Cs = 1/2 · Σ_k [g(k) − h(k)]² / [g(k) + h(k)]
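A direct transcription of the χ² cost above (assuming flattened, normalized histograms g and h; bins empty in both histograms are skipped to avoid division by zero):

```python
def chi2_cost(g, h):
    """Cs = 1/2 * sum_k (g_k - h_k)^2 / (g_k + h_k) over the flattened
    histogram bins; bins empty in both histograms contribute nothing."""
    return 0.5 * sum((a - b) ** 2 / (a + b)
                     for a, b in zip(g, h) if a + b > 0)
```

Identical histograms give a cost of 0, and two fully disjoint normalized histograms give the maximum cost of 1.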
Step (5) is realized by the following steps:
(a) Selecting a point in one graph and finding its matching point in the other graph by minimizing the Cost value;
(b) Storing the matching information with a vision library;
(c) Repeating step (a) for the remaining points until all points are matched;
(d) Calculating the standard deviation of Cs and the matching rate K under the threshold T.
Experiments show that with the threshold T set at 0.3 or below, a high matching rate K indicates that the two targets match.
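Steps (a)-(d) can be sketched as a greedy minimum-cost assignment followed by a thresholded matching rate (illustrative functions under stated assumptions: the patent stores matches in an OpenCV sequence rather than a Python list, and the exact decision rule relating K and T is paraphrased from the text):

```python
def greedy_match(costs):
    """costs[i][j] = Cs between point i of shape P and point j of shape H.
    Each point of P is paired in turn with the cheapest not-yet-used
    point of H, mimicking the repeat-until-all-matched loop."""
    pairs, used = [], set()
    for i, row in enumerate(costs):
        j = min((j for j in range(len(row)) if j not in used),
                key=lambda j: row[j])
        used.add(j)
        pairs.append((i, j, row[j]))
    return pairs

def match_rate(pairs, t=0.3):
    """Fraction of matched pairs whose cost Cs falls at or below threshold t."""
    return sum(1 for _, _, c in pairs if c <= t) / len(pairs)
```

A high `match_rate` at t = 0.3 then corresponds to the "two targets match" decision described in the text.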
The invention is simple, accurate, economical and readily extensible, so the face shape matching can be widely applied to access control, security verification, security surveillance, criminal search and other related fields.
Drawings
Fig. 1 is a flowchart of a face shape matching method according to the present invention.
FIG. 2 illustrates boundary extraction according to the present invention.
Fig. 3 is a log polar histogram of the present invention.
Fig. 4 is an example of the present invention.
Fig. 5 and 6 are specific experimental diagrams.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes and modifications of the present invention may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
Considering execution efficiency and portability, the system is implemented in standard C++ with the aid of OpenCV (Intel's open-source computer vision library). The face database used is UMIST (564 images of 20 subjects, each covering face poses from profile to frontal).
Corresponding to the method of the invention, six tasks are designed for the face shape matching system; their names and functions are described in Table 1.
Table 1: task specification of human face shape matching system
Task name Function(s)
Image pyramid processing Searching from high to low and from coarse to fine, and finding the matching position more accurately.
Diffusion filtering process And enhancing the boundary and blurring the detail content.
Boundary extraction And extracting the boundary contour.
Log polar transformation And (5) solving a log-polar coordinate histogram.
Calculation of the Cs value The Cost value of ShapeContext is found.
Similarity calculation And judging the face shape matching according to the similarity.
As shown in fig. 1, the overall face shape matching process is as follows: first, the image is smoothed in preprocessing using image-pyramid multi-scale matching and diffusion filtering; boundary information is then obtained with the Canny edge detection algorithm and a contour extraction algorithm; a log-polar transformation of the boundary yields a log-polar histogram; Shape Context matching then produces a similarity value; finally, statistics over the similarity values decide whether the shapes match.
As shown in fig. 2, boundary extraction uses the Canny edge detection algorithm and a contour extraction algorithm: Canny edge detection extracts the edge information as a binary image formed by a series of curves; a contour extraction algorithm then searches this binary image for contours, giving the contour boundary carrying the basic facial information.
As shown in fig. 3, the log-polar transformation: after boundary extraction, the log-polar method is applied to the extracted contour boundary, and the log-polar histogram is computed.
As shown in fig. 4, after the Cs values are obtained, matching points are found as follows, completing the matching and yielding the matched point set with its Cs values: 1) given two graphs P and H, for each point p_i in P, find the point h_i in H with the minimum Cost value; 2) save the matching information (p_i, its matching point h_i, and the Cost value) in an OpenCV sequence (i.e., a linked list); 3) repeat step 1) for the remaining points until all points are matched.
The standard deviation of Cs and the matching rate K under the threshold T are then calculated. Through experimentation, we found that with the threshold T set at 0.3 or below, a high matching rate K indicates that the two targets match.
As shown in figs. 5 and 6, three experimental conditions are set: different face images of the same person (class a); faces of the same person at different sizes and positions (scaling and translation) (class b); faces of different persons (class c).
For the three cases, the following are recorded: the Cs value at a matching degree of 100%, the matching degrees at Cs thresholds of 0.1, 0.2 and 0.3, and the Cs standard deviation S over the whole sample.
Contour point interval d = 10

            a1      a2      b1      b2      c1      c2      c3      c4
Cs          0.17    0.19    0.34    0.33    0.14    0.15    0.27    0.20
T = 0.1     41%     46%     0%      0%      92%     82%     27%     32%
T = 0.2     100%    100%    59%     68%     100%    100%    95%     100%
T = 0.3     100%    100%    97%     99%     100%    100%    100%    100%
S (×0.001)  5.920   8.641   16.463  10.488  4.134   4.841   13.856  8.611
Contour point interval d = 5

            a1      a2      b1      b2      c1      c2      c3      c4
Cs          0.18    0.18    0.31    0.32    0.16    0.18    0.25    0.19
T = 0.1     67%     66%     0%      0%      95%     91%     38%     53%
T = 0.2     100%    100%    74%     74%     100%    100%    96%     100%
T = 0.3     100%    100%    99%     99%     100%    100%    100%    100%
S (×0.001)  4.566   6.986   13.912  11.206  3.671   4.274   12.008  7.825
The experimental results show that with the matching-rate threshold T = 0.3, the degree of matching is 100% (except under 2D transformation); even after 2D transformation, the matching degree at T = 0.3 is still substantially 100%. This indicates that, when judging whether shapes match, a perfect (or near-perfect) match is declared with a threshold T of at most 0.3. This empirical value applies in many situations and generalizes well.
In addition, the experimental data show that the method's precision is largely insensitive to the density of the contour sampling interval, so the method applies well to a wide variety of cameras.

Claims (8)

1. A face shape matching method based on Shape Context, comprising the following steps:
(1) Preprocessing by adopting an image pyramid and a diffusion filtering technology;
(2) Extracting boundary contour information by using a Canny edge detection algorithm and a contour extraction algorithm;
(3) Carrying out logarithmic polar coordinate transformation on the extracted boundary information to obtain a logarithmic polar coordinate histogram;
(4) Obtaining all matching points by calculating the Cs value;
(5) Calculating a similarity value and performing the shape matching judgment.
2. The face shape matching method based on Shape Context of claim 1, characterized in that the image pyramid uses a high-to-low, coarse-to-fine search method.
3. The face shape matching method based on Shape Context of claim 1, characterized in that the diffusion filtering technique enhances boundaries while blurring fine detail.
4. The face shape matching method based on Shape Context of claim 1, characterized in that step (2) is realized by the following steps:
(a) Extracting the edge information to obtain a binary image;
(b) Searching the binary image for contours with a contour extraction algorithm to obtain the contour boundary of the basic facial information.
5. The face shape matching method based on Shape Context of claim 1, characterized in that step (3) comprises the following steps:
(a) Performing the log-polar transformation with the selected point as the coordinate origin;
(b) Counting the number of points falling in each grid cell;
(c) Normalizing with the empirical density method to obtain the histogram.
6. The face shape matching method based on Shape Context of claim 1, characterized in that the Cs value in step (4) is the Cost value of Shape Context.
7. The face shape matching method based on Shape Context of claim 1, characterized in that step (5) is realized by the following steps:
(a) Selecting a point in one graph and finding its matching point in the other graph by minimizing the Cost value;
(b) Storing the matching information with a vision library;
(c) Repeating step (a) for the remaining points until all points are matched;
(d) Calculating the standard deviation of Cs and the matching rate K under the threshold T.
8. The face shape matching method based on Shape Context of claim 7, characterized in that the vision library is OpenCV.
CNA2007100466764A 2007-09-29 2007-09-29 A Face Shape Matching Method Based on Shape Context Pending CN101131728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007100466764A CN101131728A (en) 2007-09-29 2007-09-29 A Face Shape Matching Method Based on Shape Context


Publications (1)

Publication Number Publication Date
CN101131728A true CN101131728A (en) 2008-02-27

Family

ID=39128992

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007100466764A Pending CN101131728A (en) 2007-09-29 2007-09-29 A Face Shape Matching Method Based on Shape Context

Country Status (1)

Country Link
CN (1) CN101131728A (en)


Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714209B (en) * 2008-10-03 2012-11-14 索尼株式会社 Image processing apparatus, and image processing method
US8582806B2 (en) 2008-10-03 2013-11-12 Sony Corporation Device, method, and computer-readable storage medium for compositing images
CN101770582B (en) * 2008-12-26 2013-05-08 鸿富锦精密工业(深圳)有限公司 Image matching system and method
CN101833763A (en) * 2010-04-28 2010-09-15 天津大学 A Method for Detection of Water Surface Reflection Image
CN101833763B (en) * 2010-04-28 2012-11-14 天津大学 Method for detecting reflection image on water surface
CN102954760A (en) * 2011-08-11 2013-03-06 株式会社三丰 Image measurement apparatus and image measurement method
CN102954760B (en) * 2011-08-11 2016-08-03 株式会社三丰 Image measuring apparatus and image measuring method
US8995773B2 (en) 2011-08-11 2015-03-31 Mitutoyo Corporation Image measurement apparatus and method of measuring works using edge detection tools
CN102521582A (en) * 2011-12-28 2012-06-27 浙江大学 Human upper body detection and splitting method applied to low-contrast video
CN102654902A (en) * 2012-01-16 2012-09-05 江南大学 Contour vector feature-based embedded real-time image matching method
CN102654902B (en) * 2012-01-16 2013-11-20 江南大学 Contour vector feature-based embedded real-time image matching method
CN102679871B (en) * 2012-05-07 2015-03-11 上海交通大学 Rapid detection method of sub-pixel precision industrial object
CN102679871A (en) * 2012-05-07 2012-09-19 上海交通大学 Rapid detection method of sub-pixel precision industrial object
CN103136520A (en) * 2013-03-25 2013-06-05 苏州大学 Shape matching and target recognition method based on PCA-SC algorithm
CN103136520B * 2013-03-25 2016-01-20 苏州大学 Shape matching and target recognition method based on the PCA-SC algorithm
CN104143076A (en) * 2013-05-09 2014-11-12 腾讯科技(深圳)有限公司 Matching method and system for face shape
CN104143076B * 2013-05-09 2016-08-03 腾讯科技(深圳)有限公司 Face shape matching method and system
CN104376298A (en) * 2013-08-16 2015-02-25 联想(北京)有限公司 Matching method and electronic device
CN103902983A (en) * 2014-04-14 2014-07-02 夷希数码科技(上海)有限公司 Wearable face recognition method and device
CN104156952A (en) * 2014-07-31 2014-11-19 中国科学院自动化研究所 Deformation resisting image matching method
CN104156952B * 2014-07-31 2017-11-14 中国科学院自动化研究所 A deformation-resistant image matching method
WO2019114036A1 (en) * 2017-12-12 2019-06-20 深圳云天励飞技术有限公司 Face detection method and device, computer device, and computer readable storage medium
CN108563997A * 2018-03-16 2018-09-21 新智认知数据服务有限公司 Method and device for establishing a face detection model and face recognition
CN108563997B (en) * 2018-03-16 2021-10-12 新智认知数据服务有限公司 Method and device for establishing face detection model and face recognition
CN108407759A (en) * 2018-05-21 2018-08-17 辽宁工业大学 Automobile intelligent starting module based on recognition of face and startup method
CN111437033A (en) * 2020-04-03 2020-07-24 天津理工大学 Virtual sensor for vascular intervention surgical robot system


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication