CN100361138C - Method and system for real-time detection and continuous tracking of human faces in video sequences - Google Patents
Method and system for real-time detection and continuous tracking of human faces in video sequences
- Publication number
- CN100361138C (granted publication); application numbers CN200510135668A / CNB2005101356688A
- Authority
- CN
- China
- Prior art keywords
- face
- detection
- tracking
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The present invention provides a method and system for the real-time detection and continuous tracking of human faces in video sequences. The method comprises the following steps: a video image is input; face detection is performed on the input video image using a real-time face detection algorithm; the detected face is then verified using a coarse-to-fine two-stage face detection algorithm; the verified face is tracked using an object tracking algorithm; and the tracked face is verified by checking the tracking region. The invention uses a face detection method based on cascaded AdaBoost statistical classifiers to achieve real-time detection of upright frontal faces, and combines it with a face tracking method based on Mean Shift and histogram features to achieve real-time detection and tracking of faces. The invention has the advantages of fast detection and tracking and strong real-time performance.
Description
Technical field
The present invention relates to the field of face detection and tracking, and in particular to a method and system for the real-time detection and continuous tracking of human faces in video sequences.
Background technology
The human face is one of the most natural channels of human-computer interaction in computer vision systems. Face detection determines the position, size and other information of every face in an image or image sequence, while face tracking continuously follows one or more detected faces in a video sequence. Face detection and tracking are not only prerequisites for technologies such as face recognition, expression recognition and face synthesis, but also have wide application value in fields such as intelligent human-computer interaction, video conferencing, intelligent surveillance and video retrieval.
The importance of face detection and the complexity of the face pattern have made face detection a long-standing research focus in computer vision, and the relevant literature and methods are numerous. They fall mainly into two classes: methods based on heuristic rules and methods based on statistical models. Rule-based methods first extract features with clear physical meaning, such as edges, skin color, motion, symmetry, contour and facial organs, then formulate a series of rules from prior knowledge, and finally detect faces by checking whether the features satisfy these rules. Such methods are generally fast, but they depend on fixed prior rules, adapt poorly to variation, and produce many false alarms. Statistical methods instead use a large number of "face" and "non-face" samples to train and construct a classifier from pixel gray-level features or other transform-domain features; the trained classifier is then applied to candidate face regions of all possible sizes, so that faces at all possible positions and scales are detected.
Although these statistical methods are trained on large sample sets, are more reliable in the statistical sense, extend the detection range, improve the robustness of the detection system and are suited to face detection in complex scenes, they are time-consuming and their real-time performance is poor.
AdaBoost-based face detection algorithms have also been adopted in the prior art, but, as seen from the above, the training time is still long and too many facial features are extracted. Moreover, existing detection and tracking techniques consider only the histogram of the previous frame when locating the target in the current frame, using it directly as the template; this easily produces unstable results, and if the tracking result of one frame is inaccurate, the results of the following frames will also be wrong in succession. Existing techniques also lack the ability to detect faces under uneven left-right illumination, are easily disturbed by noise, and have poor stability.
Furthermore, because the images are video sequences captured by a camera, the faces presented to the camera are highly uncertain: expressions, beards, glasses and hair all affect facial appearance. In addition, changes in face size, rotation, pose and pitch, partial occlusion, and variations caused by imaging conditions all strongly affect the appearance of the face and therefore the performance of the algorithm.
Summary of the invention
The technical problem solved by the present invention is that prior-art methods and systems for the real-time detection and continuous tracking of faces in captured video sequences require long computation time and have poor real-time performance. The object of the invention is to adopt a face detection method based on cascaded AdaBoost statistical classifiers to achieve real-time detection of upright frontal faces, and to combine it with a face tracking method based on Mean Shift and histogram features, thereby achieving real-time detection and tracking of faces. The method of the invention has the advantages of fast detection and tracking and strong real-time performance.
The object of the present invention is achieved as follows:
A method for the real-time detection and continuous tracking of human faces in a video sequence comprises the following steps:
inputting a video image;
performing face detection on the input video image using a real-time face detection algorithm;
verifying the detected face using a coarse-to-fine two-stage face detection algorithm;
tracking the verified face using an object tracking algorithm;
verifying the tracked face by checking the tracking region.
The real-time face detection algorithm is based on the AdaBoost algorithm and is implemented by a cascade of classifiers.
The face detection comprises the following steps:
scaling the received image information and searching for face windows;
locating feature points on the detected face and geometrically normalizing the face;
performing gray-level equalization on the face;
rotating and scaling the face;
obtaining the detected standard face image.
The step of verifying the detected face using the coarse-to-fine two-stage face detection algorithm includes the following cases:
if one or more faces are detected in a frame, these faces are tracked in the following two frames, and the faces tracked in those two frames are detected and verified;
after a face has been detected at the same position in three consecutive frames, the face is confirmed to exist and one of the faces is selected for tracking.
The method of the invention further comprises, after the gray-level equalization of the face, a step of normalizing the gray-level difference between the two sides of the face, so that the means and variances of the gray levels of the left and right halves of the face are made equal.
After the rotation and scaling of the face there is a further step of micro-feature calculation and classifier decision, in which the integral image and squared integral image of the processed face image are used to obtain micro-structure features at any scale and position in the image.
The step of verifying the detected face comprises:
continuing to track the selected face in subsequent frames and stopping the tracking when the similarity between the tracking results of two adjacent frames falls below a set value; after the previous target stops being tracked, performing face detection again in subsequent images until a new face is found, which is tracked after verification.
In the step of verifying the detected face using the coarse-to-fine two-stage face detection algorithm, the face detection is performed in two stages: the resolution of the face window searched in the coarse stage is lower than that searched in the fine stage; the image at each scale is first reduced correspondingly and face detection is performed with the coarse-stage search window, eliminating non-face windows, and then, in the original-scale image, face detection is performed on the remaining candidate face windows with the fine-stage search window.
In the step of tracking the verified face, the object tracking algorithm is based on Mean Shift and histogram features.
The histograms comprise a long-term histogram, a short-term histogram and a color histogram.
The step of verifying the tracking region means performing face detection in the tracking region every several frames; if a frontal face is detected in the tracking region, the tracking parameters, comprising the center, radius and histogram features of the face, are updated according to the size and position of the detected face; if the target is tracked for several hundred consecutive frames but no frontal face is detected in the tracking region, the tracking is stopped and face detection is performed again in subsequent images until a new face is found, which is tracked after verification.
The present invention also proposes a system for the real-time detection and continuous tracking of human faces in a video sequence, comprising a face detection device and a face tracking device. The face detection device comprises a face processing unit, a micro-feature calculation unit and a classifier unit. The face processing unit receives image information, scales the received image, exhaustively searches candidate face windows, and calculates the means and variances of the window gray levels. The micro-feature calculation unit computes the micro-structure features of each window according to the AdaBoost algorithm and sends them to the classifier unit for decision; the classifier unit applies the coarse-to-fine two-stage face detection algorithm and then passes the result to the face tracking device. The face tracking device comprises an object tracking unit and a tracking-region verification unit: the object tracking unit tracks the image using histogram features, and the tracking-region verification unit performs region detection on the tracked image and verifies the tracked face.
The classifier unit of the face detection device further comprises a coarse-to-fine two-stage face detection unit, which receives the micro-structure features and performs two-stage detection: the resolution of the face window searched in the coarse stage is lower than that searched in the fine stage; the image at each scale is first reduced correspondingly and face detection is performed with the coarse-stage search window, eliminating non-face windows, and then, in the original-scale image, face detection is performed on the remaining candidate face windows with the fine-stage search window, determining the final face windows.
The histograms used by the object tracking unit comprise a long-term histogram, a short-term histogram and a color histogram.
The technical effects produced by the present invention are significant:
The method and system of the invention detect faces based on the AdaBoost algorithm and verify them with a coarse model and a fine model, giving a very high detection rate, fast detection speed, high efficiency and real-time operation. During detection the invention normalizes the gray levels of the left and right halves of the face separately according to the mean and variance of the standard face gray levels, eliminating the influence of uneven left-right illumination on the face. After a face is detected, the invention continues to track and verify it for a short time, eliminating the influence of detection false alarms. Two local features, a long-term histogram and a short-term histogram, are introduced into the tracking process to reflect how the target histogram has changed over the preceding images, ensuring that the algorithm can track a moving target whose pose changes continuously.
Description of drawings
Fig. 1 is a flowchart of the face detection and tracking of the present invention.
Fig. 2a and Fig. 2b are schematic diagrams of detection and tracking results of the face detection and tracking of the present invention.
Fig. 3 is a schematic diagram of the face detection process implemented with the cascaded AdaBoost classifiers.
Fig. 4 shows some of the positive face sample images used for the classifier.
Fig. 5 shows some of the negative sample images, containing no faces, used for the classifier.
Fig. 6a to Fig. 6d are schematic diagrams of the labeling and collection of face samples.
Fig. 7 shows samples after the two-side gray-level normalization.
Fig. 8 shows an original sample and the samples obtained after mirroring, left/right rotation and enlargement.
Fig. 9 shows negative sample data after scale normalization.
Fig. 10 shows the seven groups of micro-features selected in an embodiment of the face detection algorithm of the present invention.
Fig. 11 is a schematic flowchart of training the K-th layer classifier with the AdaBoost training algorithm.
Fig. 12a and Fig. 12b are schematic diagrams of an embodiment of the post-processing of face detection results.
Fig. 13a and Fig. 13b are schematic diagrams of the face tracking of the present invention.
Fig. 14 is a schematic diagram of face detection results.
Fig. 15 is a schematic diagram of face detection and tracking results.
Fig. 16 is a block diagram of one configuration of the system of the present invention.
Fig. 17 is a block diagram of another configuration of the system of the present invention.
Embodiment
The present invention proposes a method for the real-time detection and continuous tracking of human faces in a video sequence, described with reference to Fig. 1:
First, a video image is input; in this step the video image is input in real time by a camera.
Then, face detection is performed on the input video image using the real-time face detection algorithm.
After the real-time input video image is received, every frame is searched to detect the presence of faces. Fig. 2a shows a face detection result, where the square frame marks the detected face. During detection, if one or more faces are detected in a frame, these faces are tracked in the following two frames, and the faces tracked in those two frames are detected and verified, to judge whether the earlier detection result is a real face.
Only after a face has been detected at the same position in three consecutive frames does the algorithm consider that a face exists at that position; if there are several faces in the scene, one face is picked out and tracking begins. In the embodiment of the present invention, the largest face is chosen for tracking.
The selected face is tracked continuously in subsequent frames; if the similarity between the tracking results of two adjacent frames falls below a set value (which may be set arbitrarily), the tracking is stopped. If no upright frontal face is detected in the region of a tracked target for a long time, the target is considered to be of little tracking value and its tracking is stopped. After the previous target stops being tracked, face detection is performed again in subsequent images until a new face is found, and the new face is then tracked. Fig. 2b shows a face tracking result.
The detection process of the present invention is described below together with the sample training process. The invention uses a statistical training method to detect frontal faces in a scene, and uses the AdaBoost theory to train the statistical face detection model. An AdaBoost-based face detection algorithm first trains a "face / non-face" two-class classifier from a large number of "face" and "non-face" samples; during detection this classifier decides whether a rectangular window of a given scale is a face. If the rectangle is m pixels high and n pixels wide, the detection procedure is as follows: the image is first scaled successively by a fixed ratio; all m × n pixel windows in the resulting image series are searched exhaustively and classified; each window is fed into the "face / non-face" classifier; the windows recognized as faces are kept as candidates; candidates at adjacent positions are then merged and averaged; and finally the positions, sizes and other information of all detected faces are output.
A real-time face detection algorithm of the present invention expresses the face pattern with Haar-like micro-structure features and, combined with the AdaBoost algorithm above, proposes a feature selection method: several weak classifiers, each based on a single feature, are combined into a strong classifier, and several strong classifiers are cascaded into the complete face detection classifier, as shown in Fig. 3. Each layer of the cascade in the detection process of the present invention is a strong classifier trained by the AdaBoost algorithm, and each strong classifier is composed of a number of weak classifiers. During detection, if a strong classifier at some layer outputs False for a sub-window, the sub-window is rejected and not examined further; if the output is True, the more complex classifier of the next layer examines the sub-window. In the search over candidate windows, each strong classifier lets almost all face samples pass while rejecting most non-face samples, so the low layers see many windows while the high layers see far fewer.
The face detection algorithm must scale the image before the exhaustive search in order to obtain the face windows. In this embodiment, taking a 320 × 240 pixel image as an example, scaled down 10 times by a ratio of 1.25 and searched pixel by pixel with a 20 × 20 window, more than 170,000 windows must be judged in total. This means that face detection always has to search a huge number of windows; faced with such a large amount of computation, the low-layer classifiers in the detection process of Fig. 3 must be very simple, i.e. the front classifiers contain few weak classifiers. The later classifiers are more complex and contain more weak classifiers, so the later layers can use more features to reject candidate windows that resemble faces, guaranteeing a low false detection rate.
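Purely as an illustrative sketch (not part of the patent's disclosure), the multi-scale exhaustive window scan and the early-rejection behaviour of the cascade described above could be organised as follows in Python; the classifier callables, the 1.25 scale ratio and the nearest-neighbour shrink are assumptions of this sketch:

```python
import numpy as np
from typing import Callable, List, Tuple

# A "strong classifier" is modelled here as a callable returning True (face) or False.
StrongClassifier = Callable[[np.ndarray], bool]

def cascade_accepts(window: np.ndarray, layers: List[StrongClassifier]) -> bool:
    """Pass the window through the layers; reject as soon as one layer says False."""
    for layer in layers:
        if not layer(window):
            return False          # early rejection: cheap low layers remove most windows
    return True                   # survived every layer, so keep as a face candidate

def scan_image(image: np.ndarray,
               layers: List[StrongClassifier],
               win: int = 20,
               scale_ratio: float = 1.25,
               n_scales: int = 10) -> List[Tuple[int, int, int]]:
    """Exhaustively scan win x win windows of a scale pyramid; return (x, y, size) candidates."""
    candidates = []
    scale = 1.0
    for _ in range(n_scales):
        h, w = int(image.shape[0] / scale), int(image.shape[1] / scale)
        if h < win or w < win:
            break
        # nearest-neighbour shrink (placeholder for a proper resize)
        ys = (np.arange(h) * scale).astype(int)
        xs = (np.arange(w) * scale).astype(int)
        small = image[np.ix_(ys, xs)]
        for y in range(h - win + 1):
            for x in range(w - win + 1):
                if cascade_accepts(small[y:y + win, x:x + win], layers):
                    # map the hit back to original-image coordinates
                    candidates.append((int(x * scale), int(y * scale), int(win * scale)))
        scale *= scale_ratio
    return candidates
```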
During face detection, the detected image is compared with the trained sample data. To train the model, a large number of positive face samples and negative samples must be collected; from these data the sample database of the invention is built. The positive samples in this database comprise many face samples with different expressions, skin colors and ages, including faces rotated in depth by -20° to 20° and faces with and without glasses; some of the samples are shown in Fig. 4. The negative samples are images that contain no faces, including landscapes, animals, text and so on, as shown in Fig. 5. During training, the key feature points of each face are analyzed, and the two eye centers, the nose, the mouth center and the chin of each positive face sample are located. Each face is geometrically normalized according to these calibration points, i.e. the major organs of the face image are aligned to standard positions, reducing the scale, translation and in-plane rotation differences between samples; the face region is then cropped according to the organ positions to form the face sample, so that the face samples contain little background interference and the organ positions of different face samples are as consistent as possible. During face detection, the organ positions of the detected face must also be determined in order to perform the geometric normalization of the face.
With reference to Figs. 6a to 6d, the geometric normalization and face-region cropping of each face sample is illustrated with a standard face image. First, the detection window size m × n given above is set to 20 × 20, and a standard frontal face image is obtained in which the two eyes have the same y-coordinate and the face is fully symmetric, as in Fig. 6a; five key feature points of this image are labeled. The position of the square face region to be cropped is determined from the distance and position of the eyes in this image. Let the eye distance be r and the midpoint of the line joining the two eyes be (x_center, y_center); the side length of the cropping square is set to 2r, i.e. twice the eye distance, and the coordinates of the cropping rectangle (x_left, y_top, x_right, y_bottom) are computed from r and (x_center, y_center).
The cropped face region is normalized to a size of 20 × 20, as in Fig. 6b, and the coordinates of the five calibration points after normalization, [x_stad(i), y_stad(i)], i = 0, 1, 2, 3, 4, are recorded.
Given any original face sample and its five labeled feature points [x_label(i), y_label(i)], i = 0, 1, 2, 3, 4, as in Fig. 6c, the affine transform coefficients between these five points and the five normalized points of the standard image are computed. The transform must preserve the overall shape of each face sample, i.e. a long face should remain long and a short face should remain short, so that the detection algorithm can detect different types of faces; therefore no direction-dependent stretching is included in the affine transform, and only rotation and uniform scaling (together with translation) are considered, so the transform takes the form
x' = a·x − b·y + c,
y' = b·x + a·y + d.
The transform parameters (a, b, c, d) in the above formula are obtained by the least-squares method.
Let the cropped face image be I, of size 20 × 20. Through the affine transform coefficients, any point (x, y) in this image can be mapped to the corresponding point (x_ori, y_ori) in the original sample. To eliminate noise, the pixel value at (x, y) in the cropped image is set to the average of the pixel values in a neighborhood of the corresponding point (x_ori, y_ori). In this way all the pixel values of I are obtained, as in Fig. 6d.
During detection, factors such as ambient illumination and the imaging device may cause abnormal brightness or contrast in the face image, strong shadows or reflections; there are also skin-color differences between ethnic groups. The geometrically normalized face samples therefore need gray-level equalization, which improves their gray-level distribution and the consistency between patterns. Because every search window in the face detection process must be equalized, a computationally expensive normalization method cannot be used; the invention uses mean and variance normalization of the image gray levels, for which a fast algorithm exists, to equalize the samples. In addition, under directional ambient illumination the brightness of the left and right halves of a face in a real scene often differs markedly. The invention therefore normalizes the gray levels of the two sides of the face separately, making the mean and variance of the left and right halves of the face each equal to a set standard value. Fig. 7 shows face images after this two-side normalization.
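A minimal sketch of the two-side gray-level normalization just described, assuming a 20 × 20 window whose left and right halves are each brought to a chosen standard mean and standard deviation (the target values here are placeholders, not values from the patent):

```python
import numpy as np

def normalize_two_sides(window: np.ndarray,
                        target_mean: float = 128.0,
                        target_std: float = 32.0) -> np.ndarray:
    """Normalize the left and right halves of a face window independently."""
    out = window.astype(np.float64).copy()
    half = window.shape[1] // 2
    for sl in (np.s_[:, :half], np.s_[:, half:]):
        part = out[sl]
        std = part.std()
        if std > 1e-6:                       # avoid dividing by ~zero on flat regions
            out[sl] = (part - part.mean()) / std * target_std + target_mean
        else:
            out[sl] = target_mean
    return np.clip(out, 0, 255)
```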
To strengthen the robustness of the classifier to face rotation within a certain angle and to size changes, the invention applies a mirror transform, rotations of ±20° and a 1.1× enlargement to each sample, as in Fig. 8. Each sample is thus expanded into five samples, forming the positive sample set for AdaBoost training.
The negative sample data are picked at random from the negative image library in each layer of AdaBoost training. The index of a negative image is chosen at random, then the size and position of the negative sample are chosen at random, the corresponding region is cropped from the chosen image, and the cropped image is normalized to a size of 20 × 20, yielding one negative sample. Fig. 9 shows some of the negative sample data.
The invention uses a feature extraction method for the feature calculation; seven groups of micro-features are used in this embodiment to express the structural characteristics of the face pattern effectively. Fig. 10 shows all the micro-feature structures in a 20 × 20 image; a feature is the difference between the mean gray levels of the pixels in the corresponding black region and white region of the image. In the first six groups of micro-features the black and white rectangles have the same size; in the seventh group the length and width of the white rectangle are three times those of the black rectangle. The length and width of the black or white rectangle in each group can be chosen freely, i.e. each rectangle dimension may take any value from 1 to 20, and the position of the center point of each group can also be chosen freely, so in theory 20 × 20 × 20 × 20 × 7 = 1,120,000 features can be obtained in a 20 × 20 image. Since in many feature configurations the black or white region extends outside the 20 × 20 image, such features are ignored; the number of valid features is therefore 89,199.
During face detection, micro-features must be computed continually and fed into each layer of AdaBoost strong classifiers for decision, so the efficiency of micro-feature computation determines the efficiency of the face detection algorithm. The integral image and squared integral image of the whole image can be used to obtain a micro-structure feature at any scale and any position in the image quickly, making a real-time face detection system possible. With this method it is not necessary to gray-normalize all the pixel values of each 20 × 20 window; only the micro-structure features need to be normalized, using the gray-level means and variances of the left and right halves of the 20 × 20 window.
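A short sketch of the constant-time rectangle sums behind the micro-features, using an integral image; the particular two-rectangle feature shown is only one illustrative instance of the seven feature groups, not a reproduction of Fig. 10:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero row/column prepended for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Sum of pixels in the w x h rectangle whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Difference of mean gray levels of a white rectangle and the black rectangle below it."""
    white = rect_sum(ii, x, y, w, h) / (w * h)
    black = rect_sum(ii, x, y + h, w, h) / (w * h)
    return white - black

img = (np.random.rand(20, 20) * 255).astype(np.uint8)
ii = integral_image(img)
print(two_rect_feature(ii, x=4, y=3, w=6, h=4))   # one feature value in constant time
```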
Fig. 11 shows the training process of the K-th layer classifier in the AdaBoost training flow. All normalized positive face samples are first passed through the K−1 layers of classifiers already trained, and the positive samples that pass these classifiers are input into the training module of the K-th layer classifier. Negative samples are again picked at random from the library of 5400 negative images; each negative sample is also first passed through the K−1 trained layers, and the samples that pass are used as the input negative samples of the training module of the K-th layer classifier. It can be guaranteed that no layer eliminates (or eliminates only very few) positive face samples during training, so the number of positive samples finally input to the training module stays essentially constant. To obtain good classification performance without making training too inefficient, the invention chooses a number of negative samples comparable to the number of positive samples: when the number of randomly chosen negative samples reaches roughly the number of positive samples, the selection stops and the training of the K-th layer classifier begins.
The AdaBoost strong classifier is composed of weak classifiers, each based on a single feature, i.e. each weak classifier corresponds to one micro-feature. A weak classifier of the present invention is defined by a micro-feature j, a threshold θ_j and a decision form, with output
h_j(x) = 1 if the decision condition on g_j(x) and θ_j holds, and h_j(x) = 0 otherwise,
where x is a 20 × 20 image window, g_j(x) is the feature value of the image under feature j, θ_j is the decision threshold corresponding to feature j, and h_j(x) is the decision output of the image under feature j. The decision condition has three possible forms: the output is 1 or 0 according to whether the input feature g_j(x) is greater than the threshold, less than the threshold, or has absolute value less than the threshold. Each weak classifier can use only one of these forms; during training, the current micro-feature is processed over all positive and negative samples, the corresponding threshold and the classification error rate on the training set are obtained for each of the three decision forms, and the form with the minimum error rate is taken as the decision form of the current weak classifier.
Training a strong classifier layer is in fact the process of finding the micro-feature index of each weak classifier, so that the classification ability of each weak classifier on the training set is as strong as possible. The number of weak classifiers contained in each layer is fixed and denoted T, and a micro-feature may occur only once: a micro-feature already chosen by an earlier weak classifier is not considered in later training. The training procedure of the algorithm is listed below:
Given a training set {(x_i, y_i)}, i = 0, 1, …, n−1, where y_i = 1 or 0 indicates whether the corresponding input sample x_i is a face sample or a non-face sample, the number of face samples being m and the number of non-face samples being l.
Select a misclassification risk multiple c, expressing the risk of misclassifying a training sample: the initial weight of each face sample is proportional to c and the initial weight of each non-face sample is proportional to 1, the weights being normalized so that they sum to 1. The larger c is, the larger the risk assigned to misclassifying a positive sample, and the more the trained classifier should keep the classification error rate of the positive samples as small as possible.
For t = 1, …, T:
(1) For each feature j, train a single-feature classifier h_j and, using the training sample weights W_t, find the optimal threshold parameter so that the error rate of h_j, ε_j = Σ_i w_i·|h_j(x_i) − y_i|, is minimized.
(2) Take the weak classifier with the minimum error rate as the t-th weak classifier h_t of the current strong classifier; its corresponding feature index is f_t and its error rate is ε_t.
(3) Compute the parameter β_t = ε_t / (1 − ε_t) and the corresponding weight α_t = ln(1 / β_t).
(4) Update the weights of all samples: w_{t+1,i} = w_{t,i}·β_t^{1−e_i} / Z_t, where e_i = 0 if the current weak classifier recognizes sample x_i correctly and e_i = 1 otherwise, and Z_t is a normalizing factor that makes the updated weights sum to 1.
An input sample x passes this layer of the strong classifier if the weighted vote of its weak classifiers, Σ_t α_t·h_t(x), reaches the threshold of the layer; otherwise the input sample is considered non-face.
The classification error rate of each strong classifier on the positive samples must be as low as possible. For a single-layer strong classifier, the larger c is, the larger the initial weights of the positive samples and the smaller those of the negative samples; since the training algorithm reduces the error rate ε_t of each weak classifier, it tends to classify the high-weight samples correctly, so a larger c gives a lower classification error rate on the positive samples and a higher error rate on the negative samples. The invention therefore adjusts the parameter c manually during training so that each strong classifier has a very small error rate on the positive training samples, generally below 0.05%. For a single layer there is no explicit requirement on the classification accuracy on the negative samples, i.e. on the false alarm rate: because the face detection algorithm is formed by cascading several classifiers, the false alarm rate of each layer does not need to be very low, since the product of the false alarm rates of dozens of layers is already very small, and if the positive-sample error rate of every layer is below 0.05%, the total accuracy can still reach 99%. This guarantees that the algorithm can detect all the types of training samples; since the training samples contain faces of many types, directions, standards and poses, the final face detection model can detect frontal faces with many kinds of interference.
In addition, the number T of weak classifiers in a single-layer strong classifier also needs careful adjustment. The larger T is, the more weak classifiers there are and, in general, the lower the false alarm rate; but because the T micro-features corresponding to a layer must be computed before each candidate window is input to that strong classifier, a larger T also means a higher computational cost for that layer. During training the size of T of each layer is adjusted continually according to the false alarm rate, trading off the false alarm rate against computational efficiency: the false alarm rate of each layer should be small, while T should not be too large, keeping the computational efficiency relatively high.
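The following sketch trains one strong-classifier layer with the asymmetric initial weights and the β-based weight update described above. It is a simplified illustration, not the patent's exact procedure: it tries only the ">" decision form with a few candidate thresholds, and it uses half of the total α as the layer threshold, whereas the embodiment also considers "<" and absolute-value decision forms and tunes c and the layer threshold by hand.

```python
import numpy as np

def train_layer(F: np.ndarray, y: np.ndarray, T: int, c: float = 5.0):
    """Train one strong classifier layer.

    F: (n_samples, n_features) matrix of micro-feature values g_j(x_i).
    y: labels, 1 = face, 0 = non-face.  T: number of weak classifiers.
    c: misclassification-risk multiple for positive samples.
    Returns a list of (feature_index, threshold, alpha).
    """
    w = np.where(y == 1, c, 1.0).astype(np.float64)
    w /= w.sum()                                   # initial weights, positives weighted by c
    chosen, used = [], set()
    for _ in range(T):
        best = None                                # (error, j, theta, predictions)
        for j in range(F.shape[1]):
            if j in used:
                continue                           # each micro-feature may be used only once
            for theta in np.percentile(F[:, j], [10, 30, 50, 70, 90]):
                pred = (F[:, j] > theta).astype(int)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, theta, pred)
        err, j, theta, pred = best
        beta = max(err, 1e-12) / max(1.0 - err, 1e-12)
        alpha = np.log(1.0 / beta)
        w *= beta ** (pred == y)                   # e_i = 0 when correct, so multiply by beta
        w /= w.sum()                               # Z_t normalization
        used.add(j)
        chosen.append((j, theta, alpha))
    return chosen

def layer_decides_face(weaks, f_row: np.ndarray) -> bool:
    """Accept if the alpha-weighted vote reaches half of the total alpha (assumed threshold)."""
    score = sum(alpha * (f_row[j] > theta) for j, theta, alpha in weaks)
    return score >= 0.5 * sum(alpha for _, _, alpha in weaks)
```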
In the face detection of the invention, the coarse-to-fine two-stage face detection algorithm is used to verify the detected face; this is described below with a specific embodiment.
Since the number of micro-features of a 20 × 20 image reaches 89,199, the best feature must be found among 89,199 features when training each weak classifier, which is very time-consuming. To improve the efficiency of the training algorithm, and at the same time the performance of the detection algorithm, the coarse-to-fine two-stage face detection algorithm proposed by the invention performs face detection in two stages: the face window searched in the coarse stage is fixed at 10 × 10, while the face window searched in the fine stage is 20 × 20. During detection, images at different scales are obtained as before; the resolution of each scaled image is then halved again, and 10 × 10 candidate face windows are searched; each window is input to the layers of coarse-stage strong classifiers, the micro-features of each layer are computed and judged, non-face windows are eliminated, and the few remaining candidate face windows are input to the fine stage. Each 10 × 10 window that passes the coarse stage is expanded to a 20 × 20 window, fine detection is performed, and these candidate windows are searched further in the non-halved scaled image at the original resolution to determine the final face windows. The training process is likewise divided into two parts: the coarse model is trained first, and then the fine model.
The resolution of all positive face samples in the training set is halved again to obtain 10 × 10 coarse-stage face samples. The total number of micro-features of a 10 × 10 face sample is only 5676, so the training efficiency of the coarse model is far higher than that of the fine model. Each micro-feature is also computed more efficiently in a 10 × 10 image than in a 20 × 20 image, so this two-stage detection method also greatly increases the speed of face detection.
For an arbitrary input image, to detect faces within a certain range of scales, the image is scaled at several factors. For example, to detect faces of scale 20 × 20 to 240 × 240 in a 320 × 240 image, the image must be reduced at several scales: for coarse detection the minimum reduction factor should be 2 and the maximum 24; for fine detection the minimum reduction factor is 1 and the maximum is 12. If the ratio between the reduction factors of adjacent scales is set to 1.25, the reduction factors of the image (for coarse detection) are 2, 2.5, 3.13, 3.91, 4.88, 6.10, 7.63, 9.54, 11.9, 14.9, 18.6 and 23.3, i.e. 12 scales in total.
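The 12 coarse-stage reduction factors quoted above can be reproduced with a short sketch; starting the same loop at 1 gives the fine-stage factors:

```python
def reduction_factors(start: float, stop: float, ratio: float = 1.25):
    """Geometric series of image reduction factors between start and stop."""
    factors, f = [], start
    while f <= stop:
        factors.append(round(f, 2))
        f *= ratio
    return factors

print(reduction_factors(2.0, 24.0))   # coarse stage: 2, 2.5, 3.13, ..., 23.3 (12 scales)
print(reduction_factors(1.0, 12.0))   # fine stage: 1, 1.25, ..., 11.64 (12 scales)
```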
The square frames that pass the coarse-to-fine two-stage detection in all the scaled images are transformed back to the scale and position of the original input image, giving the candidate positions and candidate sizes of faces in the original image. In general, a real face tends to be detected several times at adjacent positions and different scales, whereas false alarms tend to appear in isolation; Fig. 12a is an example. The detection results therefore need post-processing, in which face frames at adjacent positions are merged: if the size difference and position difference of two candidate face frames are both very small, or the overlap area of the two frames is large, the two frames are merged into one, whose position is the average of the two positions and whose size is the average of the two sizes. Each of the remaining face frames is merged from a certain number of candidate frames, and this number is an important parameter that decides whether a detected face frame is a real face. A threshold is set: if the number of merged frames is greater than the threshold, the current face position is a real face; otherwise the candidate frame is eliminated. Fig. 12b shows the result after merging.
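A rough sketch of the candidate-frame merging and the vote-count threshold described above; the closeness criterion and the numeric tolerances are assumptions chosen for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    x: float      # center x
    y: float      # center y
    size: float   # side length
    votes: int = 1

def close(a: Box, b: Box, pos_tol: float = 0.3, size_tol: float = 0.3) -> bool:
    """Adjacent position and similar size (tolerances are illustrative)."""
    s = min(a.size, b.size)
    return (abs(a.x - b.x) < pos_tol * s and
            abs(a.y - b.y) < pos_tol * s and
            abs(a.size - b.size) < size_tol * s)

def merge_candidates(cands: List[Box], min_votes: int = 3) -> List[Box]:
    merged: List[Box] = []
    for c in cands:
        for m in merged:
            if close(c, m):
                # running average of position and size, and count the vote
                n = m.votes
                m.x = (m.x * n + c.x) / (n + 1)
                m.y = (m.y * n + c.y) / (n + 1)
                m.size = (m.size * n + c.size) / (n + 1)
                m.votes += 1
                break
        else:
            merged.append(Box(c.x, c.y, c.size))
    # keep only frames built from enough candidates, which are likely real faces
    return [m for m in merged if m.votes >= min_votes]
```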
The above face detection algorithm can detect only frontal faces, and the tolerated depth rotation and in-plane rotation angles of the face are limited. Moreover, to find the faces in an image, the detection algorithm must search a large number of scaled images at different scales; although the algorithm can detect faces within tens of milliseconds, the process is still time-consuming and computation-intensive. If face detection were performed on every frame of a real-time input video sequence, the total computation would be enormous. Another important purpose of face tracking is to track one face continuously, confirming that the target tracked over a long period is the same face; subsequent face processing algorithms such as face recognition and expression recognition can then combine the recognition results of many frames of the video, greatly improving their accuracy.
Face tracking is realized on the basis of face detection. Face detection is first performed on the input images of the video sequence; to reduce the CPU load of the program, detection may be performed only every few frames. After a face is detected, it is tracked and verified in the following two frames; the largest face that passes verification is kept and tracked continuously, handling the various pose changes of the detected face. Considering that the skin color of the face is highly distinctive and differs strongly from hair, clothes and the shooting scene, the invention also uses skin-color features, obtained as a color histogram, to realize face tracking. As shown in Fig. 13, the color histogram within a circular region is computed, and the face coordinates and histogram features of frame k−1 are used to search frame k for the face position in frame k.
For face tracking, the invention uses an object tracking algorithm based on Mean Shift and histogram features. This algorithm uses histogram features to track a target of a certain color quickly and with very high processing efficiency; combined with face detection, it realizes the continuous detection and tracking of faces in the video sequence.
When calculating the face color histogram feature, the method of the invention quantizes each of the R, G and B color channels into 8 levels, so the whole color space is quantized into 8 × 8 × 8 levels and each computed histogram feature has 512 dimensions. The face region is described by three parameters (x_cen, y_cen, rad), which are the x and y coordinates of the face center and the radius of the circular face, as in Fig. 13a; in practical applications other quantization levels may of course be used. When a new image is input, the tracking algorithm computes the new position of the face in the current frame from the face position, size and histogram features of the previous frame, and updates the face radius to reflect the change of the face size.
The invention introduces two local features, a long-term histogram and a short-term histogram, into the tracking process. The long-term histogram is the average of the histograms of the preceding tens of frames, reflecting how the skin color of the face changes over a long period; the short-term histogram is the average of the histograms of the preceding few frames, reflecting the change of the skin color of the face over a short period. When searching for the face position in the current frame, the matching histogram template is the average of the long-term and short-term histogram features; thus even if the pose, illumination or expression of the face in the current frame changes sharply, the difference between its skin-color features and the histogram template is not too large, and the Mean Shift algorithm (a motion tracking algorithm) can obtain the face position quickly.
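The 8 × 8 × 8 = 512-bin skin-color histogram over a circular region and the long-/short-term matching template described above might be computed as sketched below; the circular mask and the choice of window lengths ("tens of frames" and "a few frames" are represented here by 30 and 5) are illustrative assumptions:

```python
import numpy as np
from collections import deque

def color_histogram(frame: np.ndarray, cx: int, cy: int, rad: int) -> np.ndarray:
    """512-bin RGB histogram (8 levels per channel) over a circular face region."""
    h, w, _ = frame.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= rad ** 2
    pix = frame[mask].astype(np.int64) >> 5          # quantize each channel to 8 levels
    bins = pix[:, 0] * 64 + pix[:, 1] * 8 + pix[:, 2]
    hist = np.bincount(bins, minlength=512).astype(np.float64)
    return hist / max(hist.sum(), 1.0)                # normalized histogram feature

class HistogramTemplate:
    """Matching template = average of a long-term and a short-term histogram average."""
    def __init__(self, long_len: int = 30, short_len: int = 5):
        self.long = deque(maxlen=long_len)            # roughly "tens of frames"
        self.short = deque(maxlen=short_len)          # roughly "a few frames"

    def update(self, hist: np.ndarray) -> None:
        self.long.append(hist)
        self.short.append(hist)

    def template(self) -> np.ndarray:
        # call only after at least one update
        return 0.5 * (np.mean(self.long, axis=0) + np.mean(self.short, axis=0))
```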
To guarantee that the tracked target really is a face, and that the radius rad accurately describes the change of face size, the invention adds verification of the tracking region during face tracking: frontal face detection is performed in the tracking region every few frames. At this point the position and size of the face are approximately known, so the whole image need not be searched; only a few positions and a few scales need to be examined. If a frontal face is detected in the tracking region, the system updates the tracking parameters, including the center, radius and histogram features of the face, according to the size and position of the detected face. If the target is tracked for several hundred consecutive frames but no frontal face is detected in the tracking region, the target may not be a face and its tracking is stopped; in subsequent frames the input images are searched fully, and faces are detected and tracked again. The method can be applied to face recognition, expression recognition, face synthesis and similar methods for real-time detection and tracking.
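The overall detect-track-verify loop described above could be organised along the following lines; detect_faces and mean_shift_track are placeholders for the detection cascade and the Mean Shift search, the face state is assumed to be a (center x, center y, radius, template) tuple, and the interval and threshold constants are arbitrary illustrative values:

```python
def run(frames, detect_faces, mean_shift_track,
        verify_every=10, max_unverified=300, sim_threshold=0.6):
    """Detect a face, track it with Mean Shift, and re-verify the tracked region periodically."""
    state = None                                   # (cx, cy, rad, template) of the tracked face
    unverified = 0
    for k, frame in enumerate(frames):
        if state is None:
            faces = detect_faces(frame)            # full-image cascade detection
            if faces:
                state = max(faces, key=lambda f: f[2])   # choose the largest face (by radius)
                unverified = 0
            continue
        new_state, sim = mean_shift_track(frame, state)
        if sim < sim_threshold:                    # tracking result too dissimilar: give up
            state = None
            continue
        state = new_state
        unverified += 1
        if k % verify_every == 0:                  # frontal-face check restricted to the region
            hits = detect_faces(frame, region=state)
            if hits:
                state = hits[0]                    # refresh center, radius, histogram template
                unverified = 0
        if unverified > max_unverified:            # long time without a frontal face: stop
            state = None
```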
The face detection algorithm of the invention can accurately detect faces rotated in-plane by −20° to 20° and rotated in depth to the left or right by −20° to 20°, can detect faces looking up or down within a certain range, and can detect faces with different expressions; wearing or not wearing glasses does not affect the detection result. Fig. 14 shows a group of face detection results. In Fig. 14a each face image contains several candidate frames and the detection frame after post-processing: the frame indicated by line 141 shows the post-processed detection result, and the frames indicated by line 142 are candidate frames, of which the figure contains several. The frames in Figs. 14b to 14d are the output results after post-processing. In Figs. 14c and 14d the left and right illumination of the face differs greatly, but because the method of the invention uses the left-right gray-level normalization, the robustness of the detection algorithm to illumination interference is improved and such faces are detected accurately. Fig. 15 shows a group of tracking results.
The above method of the invention can be implemented by a system for the real-time detection and continuous tracking of faces in a video sequence. With reference to Fig. 16, the system comprises a face detection device 1 and a face tracking device 2. The face detection device 1 comprises a face processing unit 11, a micro-feature calculation unit 12 and a classifier unit 14. The face processing unit 11 receives image information, scales the received image, exhaustively searches candidate face windows in the scaled image, calculates the means and variances of the left and right halves of the window image, and sends the result to the micro-feature calculation unit 12; the micro-feature calculation unit 12 computes micro-structure features according to the AdaBoost algorithm and sends them to the classifier unit 14 for decision; after the decision, the classifier unit 14 sends the result to the face tracking device 2. The face tracking device 2 comprises an object tracking unit 21 and a tracking-region verification unit 22: the object tracking unit 21 tracks the image using histogram features, and the tracking-region verification unit 22 performs region detection on the tracked image and verifies the tracked face.
In the system of the invention, the classifier unit 14 of the face detection device 1 further comprises a coarse-to-fine two-stage face detection unit 13, which receives the micro-structure features, performs the coarse-to-fine two-stage detection and the post-processing, determines the face windows, and then hands them over for tracking. A block diagram of this configuration is shown in Fig. 17.
The invention realizes the detection and continuous tracking of faces in a real-time input video sequence. The algorithm uses a face detection method based on cascaded AdaBoost statistical classifiers to achieve real-time detection of upright frontal faces; it can detect faces with different expressions in different scenes and poses, tolerating rotation changes within a certain range and angle. It also uses a face tracking method based on Mean Shift and histogram features to achieve real-time tracking of the detected face; the tracking algorithm is fast, has low CPU usage, is not affected by face pose, and can equally track profile and rotated faces.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement and the like made within the spirit and principles of the invention shall be included within the protection scope of the invention.
Claims (14)
1. A method for the real-time detection and continuous tracking of human faces in a video sequence, characterized in that it comprises the following steps:
inputting a video image;
performing face detection on the input video image using a real-time face detection algorithm;
verifying the detected face using a coarse-to-fine two-stage face detection algorithm;
tracking the verified face using an object tracking algorithm;
verifying the tracked face by checking the tracking region.
2. The method for the real-time detection and continuous tracking of human faces in a video sequence according to claim 1, characterized in that the real-time face detection algorithm is based on the AdaBoost algorithm and is implemented by a cascade of classifiers.
3. The method for the real-time detection and continuous tracking of human faces in a video sequence according to claim 2, characterized in that the face detection comprises the following steps:
scaling the received image information and searching for face windows;
locating feature points on the detected face and geometrically normalizing the face;
performing gray-level equalization on the face;
rotating and scaling the face;
obtaining the detected standard face image.
4. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 3, characterized in that the step of re-verifying the detected face using the coarse-to-fine two-stage face detection algorithm comprises the following: if one or more faces are detected in a certain frame, these faces are tracked in the next two frames, and the faces tracked in those two subsequent frames are detected and verified;
after a face has been detected at a certain position in three consecutive frames, it is determined that the face exists, and one of the faces is selected to start tracking.
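The three-consecutive-frame confirmation of claim 4 can be sketched as simple bookkeeping over the last two frames' detections; the overlap measure (IoU) and its threshold are assumptions used only for illustration.

```python
def confirm_faces(history, detections, iou_thresh=0.5):
    """A detection is confirmed only if a sufficiently overlapping box was
    also present in each of the two previous frames. `history` holds the
    boxes of the last two frames as lists of (x, y, w, h)."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x1, y1 = max(ax, bx), max(ay, by)
        x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        return inter / float(aw * ah + bw * bh - inter)

    if len(history) < 2:
        confirmed = []                      # not yet three frames of evidence
    else:
        confirmed = [d for d in detections
                     if all(any(iou(d, p) > iou_thresh for p in prev)
                            for prev in history)]
    history.append(detections)
    if len(history) > 2:
        history.pop(0)
    return confirmed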
5. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 3, characterized in that, after the gray-level equalization of the face, it further comprises a step of normalizing the gray levels of the two sides of the face by equalizing the mean and variance of the left and right halves of the face, respectively.
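As an illustration of claim 5, the left and right halves of the face window can each be rescaled to a common mean and variance, which reduces one-sided illumination effects; the target statistics below are assumptions.

```python
import numpy as np

def normalize_halves(face):
    """Give the left and right halves of the face window the same mean and
    variance. The target mean/std are assumed canonical values."""
    face = face.astype(np.float32)
    h, w = face.shape
    target_mean, target_std = 128.0, 40.0
    out = face.copy()
    for half in (np.s_[:, : w // 2], np.s_[:, w // 2:]):
        m = out[half].mean()
        s = out[half].std() + 1e-6
        out[half] = (out[half] - m) / s * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```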
6. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 3, characterized in that it further comprises a step of micro-feature calculation and classifier decision after the rotation and scaling of the face, in which the integral image and the squared integral image of the processed face image are used to obtain microstructure features at any scale and position in the image.
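A short sketch of the integral-image machinery mentioned in claim 6: with a summed-area table (and its squared counterpart for variance normalization), any rectangle sum costs four lookups, so a Haar-like microstructure feature can be evaluated at an arbitrary position and scale. The two-rectangle feature shown is just one assumed example.

```python
import numpy as np

def integral_images(gray):
    """Integral image and squared integral image, padded with a zero
    row/column so rectangle sums need no boundary checks."""
    g = gray.astype(np.float64)
    ii = np.pad(g.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sq = np.pad((g * g).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return ii, sq

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h) via four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """One assumed microstructure (Haar-like) feature at an arbitrary
    position and scale: left half minus right half of the rectangle."""
    return (rect_sum(ii, x, y, w // 2, h)
            - rect_sum(ii, x + w // 2, y, w - w // 2, h))
```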
7. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 4, characterized in that the step of verifying the detected face comprises:
continuing to track the selected face in subsequent frames; if the similarity between the tracking results of a later frame and the previous frame falls below a set value, stopping the tracking; after tracking of the previous target has stopped, performing face detection again in subsequent images until a new face is found, and proceeding to the tracking step after verification.
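The stop condition of claim 7 compares the tracking results of two consecutive frames against a set value. One way to realize such a similarity test, shown here only as an assumed example, is a histogram comparison over the tracked regions; the correlation metric and threshold are not taken from the patent.

```python
import cv2

def should_stop_tracking(prev_hist, curr_hist, min_similarity=0.6):
    """Compare the histograms of the tracked region in two consecutive
    frames and stop when the similarity falls below a set value.
    Metric and threshold are assumptions."""
    similarity = cv2.compareHist(prev_hist, curr_hist, cv2.HISTCMP_CORREL)
    return similarity < min_similarity
```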
8. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 1, characterized in that, in the step of re-verifying the detected face using the coarse-to-fine two-stage face detection algorithm, the face detection is performed in two stages: the resolution of the face window searched in the coarse detection is lower than the resolution of the face window searched in the fine detection; the image at each scale is first correspondingly downscaled and scanned with the coarse-detection face window to eliminate non-face windows, and then, in the original-scale image, face detection is performed again on the remaining face candidate windows with the fine-detection face window.
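A minimal sketch of such a coarse-to-fine scan, assuming two classifier objects with a `predict(window) -> bool` interface (an assumption, not an API from the patent): the coarse stage runs on a downscaled copy with a coarse grid, and only the surviving candidates are re-checked at the original scale.

```python
import cv2

def coarse_to_fine_detect(gray, coarse_clf, fine_clf, shrink=0.5):
    """Two-stage scan: cheap coarse pass on a downscaled image, then a
    fine re-check of surviving candidates at the original scale."""
    small = cv2.resize(gray, None, fx=shrink, fy=shrink)
    win, coarse_step = 24, 4                    # assumed window size and stride
    candidates = []
    for y in range(0, small.shape[0] - win, coarse_step):
        for x in range(0, small.shape[1] - win, coarse_step):
            if coarse_clf.predict(small[y:y + win, x:x + win]):
                # Map the surviving window back to original-scale coordinates.
                candidates.append((int(x / shrink), int(y / shrink), int(win / shrink)))
    faces = []
    for cx, cy, cw in candidates:
        patch = gray[cy:cy + cw, cx:cx + cw]
        if patch.shape[0] == cw and patch.shape[1] == cw:
            if fine_clf.predict(cv2.resize(patch, (win, win))):
                faces.append((cx, cy, cw, cw))
    return faces
```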
9. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 1, characterized in that, in the step of tracking the verified face, the object tracking algorithm is based on Mean shift and histogram features.
10. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 9, characterized in that the histograms comprise a long-term histogram, a short-term histogram and a color histogram.
11. The method for real-time detection and continuous tracking of human faces in a video sequence according to claim 9, characterized in that the step of verifying the tracking region means performing face detection in the tracking region every several frames; if a frontal face is detected in the tracking region, the tracking parameters are updated according to the size and position of the detected face, the tracking parameters comprising the center, radius and histogram features of the face; if the target is tracked for hundreds of consecutive frames but no frontal face is detected in the tracking region, the tracking is stopped, face detection is performed again in subsequent images until a new face is found, and tracking proceeds after verification.
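The periodic verification loop of claim 11 can be sketched as follows; the `track` state object (center, radius, histogram, miss counter), the check interval and the give-up limit are all assumptions introduced for illustration.

```python
import cv2

def verify_tracking_region(frame, track, detector, frame_idx,
                           check_every=10, max_missed=300):
    """Every few frames, run the frontal-face detector inside the tracked
    region; on a hit, refresh center, radius and histogram, otherwise count
    a miss and give up after too many frames without a frontal face."""
    if frame_idx % check_every:
        return track
    x, y = track["center"]
    r = track["radius"]
    x0, y0 = max(x - r, 0), max(y - r, 0)
    roi = frame[y0:y + r, x0:x + r]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.2, 3)
    if len(faces):
        fx, fy, fw, fh = faces[0]
        track["center"] = (x0 + fx + fw // 2, y0 + fy + fh // 2)
        track["radius"] = max(fw, fh) // 2
        hsv = cv2.cvtColor(roi[fy:fy + fh, fx:fx + fw], cv2.COLOR_BGR2HSV)
        track["hist"] = cv2.calcHist([hsv], [0], None, [180], [0, 180])
        track["missed"] = 0
    else:
        track["missed"] = track.get("missed", 0) + check_every
        if track["missed"] > max_missed:
            track = None               # fall back to full-frame detection
    return track
```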
12. A system for real-time detection and continuous tracking of human faces in a video sequence, characterized in that it comprises a face detection device and a face tracking device, the face detection device comprising a face processing unit, a micro-feature calculation unit and a classifier unit; the face processing unit receives image information, scales the received image, exhaustively searches for candidate face windows, and calculates the mean and variance of the window gray levels; the micro-feature calculation unit calculates the microstructure features of each window according to the AdaBoost algorithm and sends them to the classifier unit for decision, and the classifier unit, after making its decision using the coarse-to-fine two-stage face detection algorithm, sends the result to the face tracking device; the face tracking device comprises an object tracking unit and a tracking-region verification unit, the object tracking unit performs its calculation with histogram features to track the image, and the tracking-region verification unit performs region detection on the tracked image and verifies the tracked face.
13. The system for real-time detection and continuous tracking of human faces in a video sequence according to claim 12, characterized in that the classifier unit of the face detection device further comprises a coarse-to-fine two-stage face detection unit, which receives the microstructure features and performs the coarse-to-fine two-stage detection: the resolution of the face window searched in the coarse detection is lower than the resolution of the face window searched in the fine detection; the image at each scale is first correspondingly downscaled and scanned with the coarse-detection face window to eliminate non-face windows, and then, in the original-scale image, face detection is performed on the remaining face candidate windows with the fine-detection face window, thereby determining the face window.
14. The system for real-time detection and continuous tracking of human faces in a video sequence according to claim 12, characterized in that the histograms adopted by the object tracking unit comprise a long-term histogram, a short-term histogram and a color histogram.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005101356688A CN100361138C (en) | 2005-12-31 | 2005-12-31 | Method and system of real time detecting and continuous tracing human face in video frequency sequence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1794264A CN1794264A (en) | 2006-06-28 |
CN100361138C true CN100361138C (en) | 2008-01-09 |
Family
ID=36805689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005101356688A Active CN100361138C (en) | 2005-12-31 | 2005-12-31 | Method and system of real time detecting and continuous tracing human face in video frequency sequence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100361138C (en) |
Families Citing this family (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100397411C (en) * | 2006-08-21 | 2008-06-25 | 北京中星微电子有限公司 | People face track display method and system for real-time robust |
CN100426317C (en) * | 2006-09-27 | 2008-10-15 | 北京中星微电子有限公司 | Multiple attitude human face detection and track system and method |
CN100426318C (en) * | 2006-09-28 | 2008-10-15 | 北京中星微电子有限公司 | AAM-based object location method |
CN101178769B (en) * | 2007-12-10 | 2013-03-27 | 北京中星微电子有限公司 | Health protecting equipment and realization method thereof |
CN101221620B (en) * | 2007-12-20 | 2011-04-06 | 北京中星微电子有限公司 | Human face tracing method |
CN101499128B (en) * | 2008-01-30 | 2011-06-29 | 中国科学院自动化研究所 | 3D Face Action Detection and Tracking Method Based on Video Stream |
CN101339608B (en) * | 2008-08-15 | 2011-10-12 | 北京中星微电子有限公司 | Object tracking method and system based on detection |
CN101383001B (en) * | 2008-10-17 | 2010-06-02 | 中山大学 | A Fast and Accurate Frontal Face Discrimination Method |
NO329897B1 (en) * | 2008-12-19 | 2011-01-24 | Tandberg Telecom As | Procedure for faster face detection |
CN101447023B (en) * | 2008-12-23 | 2013-03-27 | 北京中星微电子有限公司 | Method and system for detecting human head |
CN101456501B (en) * | 2008-12-30 | 2014-05-21 | 北京中星微电子有限公司 | Method and device for controlling elevator buttons |
CN101577812B (en) * | 2009-03-06 | 2014-07-30 | 北京中星微电子有限公司 | Method and system for post monitoring |
TWI401963B (en) * | 2009-06-25 | 2013-07-11 | Pixart Imaging Inc | Dynamic image compression method for face detection |
CN101661554B (en) * | 2009-09-29 | 2012-02-01 | 哈尔滨工程大学 | Automatic identification method of frontal human body under long-distance video |
CN101706721B (en) * | 2009-12-21 | 2012-11-28 | 汉王科技股份有限公司 | Face detection method simulating radar scanning |
CN101794382B (en) * | 2010-03-12 | 2012-06-13 | 华中科技大学 | Method for counting passenger flow of buses in real time |
WO2011161307A1 (en) | 2010-06-23 | 2011-12-29 | Nokia Corporation | Method, apparatus and computer program product for tracking face portion |
US9628755B2 (en) * | 2010-10-14 | 2017-04-18 | Microsoft Technology Licensing, Llc | Automatically tracking user movement in a video chat application |
CN102004899B (en) * | 2010-11-03 | 2012-09-26 | 无锡中星微电子有限公司 | Human face identifying system and method |
CN102004905B (en) * | 2010-11-18 | 2012-11-21 | 无锡中星微电子有限公司 | Human face authentication method and device |
CN102004909A (en) * | 2010-11-30 | 2011-04-06 | 方正国际软件有限公司 | Method and system for processing identity information |
CN102110399B (en) * | 2011-02-28 | 2016-08-24 | 北京中星微电子有限公司 | A kind of assist the method for explanation, device and system thereof |
CN103020580B (en) * | 2011-09-23 | 2015-10-28 | 无锡中星微电子有限公司 | Fast face detecting method |
CN102722698B (en) * | 2012-05-17 | 2014-03-12 | 上海中原电子技术工程有限公司 | Method and system for detecting and tracking multi-pose face |
CN103218603B (en) * | 2013-04-03 | 2016-06-01 | 哈尔滨工业大学深圳研究生院 | A kind of face automatic marking method and system |
CN103310466B (en) * | 2013-06-28 | 2016-02-17 | 安科智慧城市技术(中国)有限公司 | A kind of monotrack method and implement device thereof |
CN104866805B (en) * | 2014-02-20 | 2020-09-11 | 腾讯科技(深圳)有限公司 | Method and device for real-time tracking of human face |
CN104036237B (en) * | 2014-05-28 | 2017-10-10 | 中国人民解放军海军总医院 | The detection method of rotation face based on on-line prediction |
SG10201504080WA (en) * | 2015-05-25 | 2016-12-29 | Trakomatic Pte Ltd | Method and System for Facial Recognition |
CN106326817B (en) * | 2015-07-03 | 2021-08-03 | 佳能株式会社 | Method and apparatus for detecting object from image |
CN106469289A (en) * | 2015-08-16 | 2017-03-01 | 联芯科技有限公司 | Facial image sex-screening method and system |
CN206214373U (en) * | 2016-03-07 | 2017-06-06 | 维看公司 | Object detection from visual information to blind person, analysis and prompt system for providing |
TW201743241A (en) | 2016-06-01 | 2017-12-16 | 原相科技股份有限公司 | Portable electronic device and operation method thereof |
CN118312941A (en) * | 2016-06-15 | 2024-07-09 | 原相科技股份有限公司 | Portable electronic device |
CN106339693A (en) * | 2016-09-12 | 2017-01-18 | 华中科技大学 | Positioning method of face characteristic point under natural condition |
CN106407966B (en) * | 2016-11-28 | 2019-10-18 | 南京理工大学 | A face recognition method applied to attendance |
CN106919903B (en) * | 2017-01-19 | 2019-12-17 | 中国科学院软件研究所 | A Robust Deep Learning-Based Method for Continuous Emotion Tracking |
CN107292284B (en) * | 2017-07-14 | 2020-02-28 | 成都通甲优博科技有限责任公司 | Target re-detection method and device and unmanned aerial vehicle |
CN107633208B (en) * | 2017-08-17 | 2018-12-18 | 平安科技(深圳)有限公司 | Electronic device, the method for face tracking and storage medium |
CN108564037B (en) * | 2018-04-15 | 2021-06-08 | 南京明辉创鑫电子科技有限公司 | Salutation posture detection and correction method |
CN109614841B (en) * | 2018-04-26 | 2023-04-18 | 杭州智诺科技股份有限公司 | Rapid face detection method in embedded system |
CN109145771B (en) * | 2018-08-01 | 2020-11-20 | 武汉普利商用机器有限公司 | Face snapshot method and device |
CN109271848B (en) * | 2018-08-01 | 2022-04-15 | 深圳市天阿智能科技有限责任公司 | Face detection method, face detection device and storage medium |
CN109543534B (en) * | 2018-10-22 | 2020-09-01 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Method and device for re-detecting lost target in target tracking |
CN109670451A (en) * | 2018-12-20 | 2019-04-23 | 天津天地伟业信息系统集成有限公司 | Automatic face recognition tracking |
CN109946229B (en) * | 2019-02-25 | 2024-07-16 | 南京文易特电子科技有限公司 | Intelligent digital double-pull-wire detection system and detection method for cigarette strip package |
CN110288632A (en) * | 2019-05-15 | 2019-09-27 | 北京旷视科技有限公司 | A kind of image processing method, device, terminal and storage medium |
CN110287778B (en) * | 2019-05-15 | 2021-09-10 | 北京旷视科技有限公司 | Image processing method and device, terminal and storage medium |
CN112101063A (en) * | 2019-06-17 | 2020-12-18 | 福建天晴数码有限公司 | Skew face detection method and computer-readable storage medium |
CN110555867B (en) * | 2019-09-05 | 2023-07-07 | 杭州智爱时刻科技有限公司 | Multi-target object tracking method integrating object capturing and identifying technology |
CN112749603B (en) * | 2019-10-31 | 2024-09-17 | 上海商汤智能科技有限公司 | Living body detection method, living body detection device, electronic equipment and storage medium |
CN112153334B (en) * | 2020-09-15 | 2023-02-21 | 公安部第三研究所 | Intelligent video box device for security management and corresponding intelligent video analysis method |
CN117437678A (en) * | 2023-11-01 | 2024-01-23 | 烟台持久钟表有限公司 | Front face duration statistics method, system, device and storage medium |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6301370B1 (en) * | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US20040109584A1 (en) * | 2002-09-18 | 2004-06-10 | Canon Kabushiki Kaisha | Method for tracking facial features in a video sequence |
WO2004051551A1 (en) * | 2002-11-29 | 2004-06-17 | Sony United Kingdom Limited | Face detection and tracking |
WO2005096213A1 (en) * | 2004-03-05 | 2005-10-13 | Thomson Licensing | Face recognition system and method |
Non-Patent Citations (3)
Title |
---|
A new fast training algorithm for AdaBoost. Wang Haichuan, Zhang Liming. Journal of Fudan University (Natural Science), Vol. 43, No. 1. 2004 *
Research on dynamic facial expression recognition technology. Ying Wei, pp. 27-28 and 36-37, China Doctoral and Master's Dissertations Full-text Database. 2005 *
Facial expression recognition using the AdaBoost algorithm. Yang Guoliang, Wang Zhiliang, Ren Jinxia. Computer Applications, Vol. 25, No. 4. 2005 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111898581A (en) * | 2020-08-12 | 2020-11-06 | 成都佳华物链云科技有限公司 | Animal detection method, device, electronic equipment and readable storage medium |
CN111898581B (en) * | 2020-08-12 | 2024-05-17 | 成都佳华物链云科技有限公司 | Animal detection method, apparatus, electronic device, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN1794264A (en) | 2006-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100361138C (en) | Method and system of real time detecting and continuous tracing human face in video frequency sequence | |
CN102214291B (en) | Method for quickly and accurately detecting and tracking human face based on video sequence | |
Torralba | Contextual priming for object detection | |
US7953253B2 (en) | Face detection on mobile devices | |
CN100458831C (en) | Human face model training module and method, human face real-time certification system and method | |
CN100440246C (en) | Positioning method for human face characteristic point | |
CN102136075B (en) | Multiple-viewing-angle human face detecting method and device thereof under complex scene | |
CN103632132A (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN101236599A (en) | Face recognition detection device based on multi-camera information fusion | |
CN109902590A (en) | Pedestrian re-identification method based on distance learning of deep multi-view features | |
CN109800624A (en) | A kind of multi-object tracking method identified again based on pedestrian | |
CN102799901A (en) | Method for multi-angle face detection | |
CN102902986A (en) | Automatic gender identification system and method | |
Rouhi et al. | A review on feature extraction techniques in face recognition | |
CN102938065A (en) | Facial feature extraction method and face recognition method based on large-scale image data | |
CN105893946A (en) | Front face image detection method | |
CN103150546A (en) | Video face identification method and device | |
CN108681737A (en) | A kind of complex illumination hypograph feature extracting method | |
CN102880864A (en) | Method for snap-shooting human face from streaming media file | |
Vij et al. | A survey on various face detecting and tracking techniques in video sequences | |
Harika et al. | Image Overlays on a video frame Using HOG algorithm | |
Tsang et al. | Combined AdaBoost and gradientfaces for face detection under illumination problems | |
Curran et al. | The use of neural networks in real-time face detection | |
Karungaru et al. | Face recognition in colour images using neural networks and genetic algorithms | |
Xu et al. | A novel multi-view face detection method based on improved real adaboost algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20180410 Address after: 100191 Xueyuan Road, Haidian District, Haidian District, Beijing, No. 607, No. six Patentee after: Beijing Vimicro AI Chip Technology Co Ltd Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor Patentee before: Beijing Vimicro Corporation |