CN105488478A - Face recognition system and method - Google Patents
- Publication number
- CN105488478A, CN105488478B, CN201510872357.3A, CN201510872357A
- Authority
- CN
- China
- Prior art keywords
- face
- video
- detected
- user
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention relates to a face recognition system and method. The system comprises at least a data input module, a face analysis module and a data output module. The data input module provides the image sequence to be detected to the face analysis module. Using deep learning, the face analysis module performs face recognition, retrieves similar faces through a KD tree, performs parallel retrieval, comparison and analysis across a plurality of sub-databases, and merges the analysis results. The data output module overlays the face detection frame and face-related statistics on the original video and outputs the result to the front end for display. The invention further provides a method for implementing the system. The disclosed system recognizes faces automatically, rapidly and accurately.
Description
Technical field
The disclosure relates to the field of video surveillance, and in particular to a face recognition system and method.
Background art
At present, many industries, in order to attract and retain more valuable and potentially valuable VIP customers, pay increasing attention to providing higher-quality, targeted service to visiting customers. Traditional industries distinguish customers by means such as VIP cards, but carrying a card or reporting a card number is neither user-friendly nor convenient. Face recognition, by contrast, is a technology that performs identification based on facial feature information and has advantages such as being contactless, concurrent, non-compulsory and intuitive.
Summary of the invention
In view of the above problems, the present disclosure provides a face recognition system and method. The system performs face recognition using deep learning and provides a complete video-surveillance face recognition system, which can support applications such as bank VIP identification, store welcoming, and face recognition in surveillance scenes. By using the time, place and duration at which a face is detected, the system can further perform visitor-flow statistics, which helps improve service or query for specific users. In addition, the disclosure provides a method for implementing the disclosed system.
A face recognition system, the system comprising at least: a data input module, a face analysis module and a data output module;
the data input module is configured to transmit the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is configured to mark the faces recognized by the face analysis module and overlay the information relevant to each face on the original video;
the message subscription unit is configured to send event messages to subscribing terminal users;
wherein the face analysis module is configured to detect, analyze and recognize faces in the image sequence to be detected, and comprises at least the following units:
U100, a face detection and tracking unit: detecting and tracking faces in the received images, performing quality judgment, selecting frames that meet the requirements as key frames, and passing them to the face comparison unit;
U200, a face comparison unit: receiving the key frames, extracting the face feature of each frame, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier.
To facilitate implementation of the system, in one embodiment a method for implementing the disclosed system is provided, namely a face recognition method, the method comprising at least:
S100, detecting and tracking faces: receiving the image sequence to be detected, detecting and tracking faces in the images to be detected, performing quality judgment, and selecting frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a number of image frames within a certain time interval;
S200, comparing faces: extracting the face features of the key frames, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier;
S300, outputting results: marking the faces detected in the key frames with marking frames and overlaying the information relevant to each face on the original video; and/or sending subscribed event messages to terminal users.
Embodiments
In a basic embodiment, a face recognition system is provided, the system comprising at least: a data input module, a face analysis module and a data output module;
the data input module is configured to transmit the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is configured to mark the faces recognized by the face analysis module and overlay the information relevant to each face on the original video;
the message subscription unit is configured to send event messages to subscribing terminal users;
wherein the face analysis module is configured to detect, analyze and recognize faces in the image sequence to be detected, and comprises at least the following units:
U100, a face detection and tracking unit: detecting and tracking faces in the received images, performing quality judgment, selecting frames that meet the requirements as key frames, and passing them to the face comparison unit;
U200, a face comparison unit: receiving the key frames, extracting the face feature of each frame, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier.
In this embodiment, the output of the data input module is face images; its input may be multi-channel network video sources, image sequences, offline video or real-time video, as long as images containing faces can be obtained after processing. When detecting, the system may detect every received image, or preferably only a subset of them. The image sequence to be detected may be a number of frames captured within a certain time interval, or a number of manually selected frames. In one embodiment, the system performs face detection once every 6 frames. During detection, face positions and face key-point information are extracted from the image; the key-point information may include the positions of the eye corners, eyebrow ends, mouth corners and nose. When the image sequence is a single frame, that image itself serves as the key frame; when the sequence contains multiple frames, the N frames of best quality are selected as key frames. Quality is judged by scoring the indicators described below and choosing the top N frames by score; the indicators include face sharpness, size, real-face likelihood, occlusion, illumination and so on. The face feature is represented by a multi-dimensional feature vector; in one embodiment, a feature vector of about 180 dimensions is used. Once a face is detected, it is tracked in subsequent frames. When searching, the N groups of face features are taken as a whole, similar faces are retrieved from the user information database, and the several faces with the highest scores are returned as the result. The results of comparison and analysis, together with the original images, may be sent to the image output unit and the video assembling and distribution unit via the HTTP protocol or a message server. In one embodiment, when a face from the face database is recognized in the monitored area, the system immediately issues a notification or alarm; the message sent to the subscriber includes the recognition result (e.g., VIP or suspicious person), time and place, face picture, and other information about the person of interest.
In one embodiment, a method for the quality judgment in U100 is given, comprising the following steps:
S1010, for each detected face image, first judging whether the eye distance meets the set requirement; if so, performing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, performing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal-face score meets the set requirement; if so, judging that the frame can be used for face recognition; otherwise, discarding the detected face image.
In one embodiment, a concrete implementation for selecting key frames is provided. In this embodiment, for each capture of a tracked face, the eye distance > 25, the face confidence score > 0.95 and the frontal-face score are used to judge whether the frame can be used for recognition. Further, this embodiment also provides a programmatic key-frame selection method: for each image tracked as the same face, a key-frame container with a capacity of 10 is maintained internally. At the start, while fewer than 10 frames have been collected, every frame is stored in the container; after 10 frames, a frame that is suitable for recognition and whose frame number is more than 10 apart from the last stored frame replaces the lowest-quality frame in the container. The number of frames processed for the face being tracked is recorded, and tracking ends once this number exceeds 20.
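As a purely illustrative sketch of the key-frame container described above (not the patent's own code), the following Python example maintains the per-track buffer with the thresholds of this embodiment (eye distance > 25, confidence > 0.95, capacity 10, frame gap 10, stop after 20 frames); the way the quality score combines the frontal-face score and sharpness, and the frontal-face threshold, are assumptions, since the text does not give an exact formula.

```python
class KeyFrameContainer:
    CAPACITY = 10       # at most 10 key frames kept per tracked face
    MIN_GAP = 10        # a new key frame must be >10 frames after the last stored one
    MAX_FRAMES = 20     # stop tracking after 20 processed frames

    def __init__(self):
        self.frames = []              # list of (frame_no, quality, image)
        self.last_stored = -10**9
        self.processed = 0

    @staticmethod
    def usable(eye_dist, confidence, frontal_score, min_frontal=0.5):
        # Quality judgment S1010-S1012; min_frontal is an assumed threshold.
        return eye_dist > 25 and confidence > 0.95 and frontal_score > min_frontal

    def add(self, frame_no, image, eye_dist, confidence, frontal_score, sharpness):
        """Feed one tracked detection; return False once tracking should end."""
        self.processed += 1
        if self.processed > self.MAX_FRAMES:
            return False
        quality = 0.5 * frontal_score + 0.5 * sharpness   # assumed quality score
        if len(self.frames) < self.CAPACITY:
            # Before the container is full, every frame is stored.
            self.frames.append((frame_no, quality, image))
            self.last_stored = frame_no
        elif (self.usable(eye_dist, confidence, frontal_score)
              and frame_no - self.last_stored > self.MIN_GAP):
            # After it is full, a usable frame replaces the lowest-quality one.
            worst = min(range(len(self.frames)), key=lambda i: self.frames[i][1])
            if quality > self.frames[worst][1]:
                self.frames[worst] = (frame_no, quality, image)
                self.last_stored = frame_no
        return True
```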
In one embodiment, the detection and tracking in U100 comprises the following steps:
S101, performing face detection once every several frames, and when a face is detected, marking the part of the image containing the face that meets the quality requirements with a marking frame;
S102, judging whether the marked face area overlaps with an already-detected face area; when the overlap ratio meets a predetermined threshold, considering it to be the same face as the detected one and proceeding to step S103; otherwise, considering the currently marked face to be a new face and ending the tracking;
S103, performing face alignment on the marked face within the marking frame, detecting the face key-point positions, calculating the bounding rectangle of the face key points, and replacing the previously detected image in the marking frame that is considered to belong to the same face.
In this embodiment, a marking frame is used to mark the part containing the face. The marked part may be the head; preferably it may also include the shoulders, and including the shoulders can improve the recognition rate. Whichever manner is adopted, the overlap ratio can be measured by a confidence value: when the computed confidence reaches a certain range, the two objects can be considered the same target, and the required range can be determined by testing.
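The overlap test of step S102 can be illustrated with a plain intersection-over-union computation, shown below as a minimal sketch; the IoU formulation and the 0.5 threshold are assumptions, since the text only states that the overlap ratio must reach a range determined by testing.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def same_face(marked_box, tracked_box, threshold=0.5):   # threshold is an assumption
    # Step S102: if the marked area overlaps enough with a tracked face area,
    # treat it as the same face and continue to S103; otherwise it is a new face.
    return iou(marked_box, tracked_box) >= threshold
```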
Preferably, multiple databases and parallel retrieval are used, that is, the user information database in U200 comprises multiple sub-databases, and the search is performed in parallel over the multiple sub-databases. Comparison and analysis are then performed on the retrieval results, and the analysis results are merged. This approach supports importing a large number of face images into the user information database without increasing the retrieval time. Each sub-database imports a certain number of face images, and the multiple face images of a single person are imported into the same sub-database. In one embodiment, each sub-database is retrieved in parallel by multiple threads, and the results of the sub-databases are then merged according to the comparison results.
In one embodiment, a method for obtaining the face features of enrolled face images is given: the extraction in U200 uses the DeepID deep learning algorithm to extract face features, which helps recognize faces accurately. In one embodiment, this extraction method yields a feature vector of about 180 dimensions.
Since face features are represented by multi-dimensional feature vectors, one embodiment provides an approach for reducing the number of comparisons and speeding up the search for similar feature vectors, that is, the similar face features in U200 are obtained through the following steps:
S2011, building a KD tree: in the process of searching for similar face features, building a KD tree and searching for the K nearest neighbours, where K >= M;
S2012, traversing the KD tree: during traversal, at each level one dimension of the face feature is chosen for comparison to determine which branch to search at the next level, finally determining the multiple face features similar to the key frame.
In this approach, a KD retrieval tree is built over the features, and the search for similar features is realized by traversing the tree; in order to reduce the number of comparisons, one feature dimension is chosen for comparison at each level to determine which branch will be searched at the next level.
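A minimal sketch of the KD-tree lookup of S2011-S2012 is given below using SciPy's cKDTree; the random 180-dimensional features, the database size and the mapping of enrolled images to user identifiers are placeholders, not data from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed setup: each enrolled face image contributes one ~180-dim feature vector.
enrolled_features = np.random.rand(10000, 180)   # placeholder for the enrolled database
enrolled_user_ids = np.arange(10000) // 5        # e.g. M = 5 images per user (assumption)

tree = cKDTree(enrolled_features)                # S2011: build the KD tree once

def similar_faces(query_feature, k=20):
    """S2012: traverse the tree to find the K nearest enrolled features (K >= M)."""
    distances, indices = tree.query(query_feature, k=k)
    return [(int(enrolled_user_ids[i]), float(d)) for i, d in zip(indices, distances)]
```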
In one embodiment, in step S102, when a newly marked face is judged to be the same face as an already-detected face, the same second identifier is used to identify the newly marked face image and the detected face; and the comparison and analysis in U200 comprises the following steps:
S201, for the M frames marked with the same second identifier, computing a quality score q_i, i ∈ [1, M], according to whether the face is frontal and its sharpness;
S202, for each of the M frames, retrieving and comparing against the face database to find the N most similar users, with corresponding similarities S_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, for the K users obtained in total from the retrieval over the M frames, computing an aggregate similarity score for each of the K users;
S204, sorting the K users in descending order of the aggregate score and choosing the several most similar users.
Under this comparison scheme, if the user information database comprises multiple sub-databases, the final recognition result can be obtained in several ways. For example, after performing parallel retrieval over the sub-databases, the results returned by each sub-database are combined, the similarities of all the most similar users are sorted as in steps S202-S204, and a selection is made. As another example, each sub-database returns its top-ranked face features, the returned face features are re-sorted by similarity value, and the face images corresponding to the top-ranked features under the new ordering are selected as the result.
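The aggregate similarity score of S203 appears in the original patent as a formula image that is not reproduced in this text, so the sketch below uses one plausible choice, a quality-weighted average of the per-frame similarities, purely as an assumed stand-in for the missing formula.

```python
from collections import defaultdict

def rank_users(quality, similarity, top=3):
    """
    quality:    {frame_i: q_i} for the M frames sharing one second identifier (S201).
    similarity: {frame_i: {user_j: S_ij}} for the N most similar users per frame (S202).
    Returns the `top` users with the highest aggregate score (S203-S204).
    The quality-weighted average used here is an assumption; the patent's own
    aggregation formula is an image not reproduced in this text.
    """
    score = defaultdict(float)
    weight = defaultdict(float)
    for i, users in similarity.items():
        for user, s in users.items():
            score[user] += quality[i] * s
            weight[user] += quality[i]
    aggregate = {u: score[u] / weight[u] for u in score}
    # S204: descending sort, keep the most similar users.
    return sorted(aggregate.items(), key=lambda kv: kv[1], reverse=True)[:top]
```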
Optionally, after the comparison and analysis, the face comparison unit also performs the following operations:
S2031, computing face attributes using a deep learning method;
S2032, judging whether the detected face is present in the user information database; if so, updating the face attribute results; otherwise, storing the recognition result together with the face attribute results.
The face attributes include appearance attributes such as the user's gender, age, and whether glasses, a hat or a mask are worn. Storing face attributes allows the system, when providing an external search function, to add retrieval dimensions: detected faces can be filtered by time, by similarity to enrolled faces, by appearance attributes and by place, narrowing the search range, speeding up retrieval and improving retrieval accuracy.
Optionally, on the basis of storing the face attribute results, a statistics time point and place may also be attached to each result, that is, the face attribute results also include the time point and place at which the image was acquired. This provides data support for locating when and whether a particular face has appeared in a certain area. In one embodiment, the system builds a separate user information database for special persons such as VIPs or suspicious persons; when a user queries such persons, the query can be compared directly against the face features of the face images stored in that database, making it easy and fast to locate when and whether a particular face has appeared in a certain area.
In one embodiment, the system has a visitor-flow statistics function, that is, the face comparison unit also performs the following operation:
S2030, counting the number of faces detected within a certain period, and the time of appearance and duration of each face. In this embodiment, the system can compare the currently detected face with the faces already detected within the period; if they are judged to be the same person, the count of appearances of that person is incremented. In this way, the number of distinct faces appearing within a certain time interval can be obtained, as well as, across several time intervals, the number of appearances of the same face and the time and duration of each appearance. To realize this statistics function, the system may build a temporary user information database for the detected faces, which can be rebuilt periodically; the system may also build a dedicated visitor feature database. When the system is deployed in a store, the above visitor-flow statistics can be performed for the store.
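A small sketch of the visitor-flow statistics of S2030 follows: a temporary registry keyed by a face identifier records, for each distinct face, when it appeared, how often, and for how long; the data layout and the assumption that the identity test is supplied by the comparison step are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Appearance:
    first_seen: float   # timestamp of the first detection in the interval
    last_seen: float    # timestamp of the latest detection
    count: int = 1      # number of detections within the interval

class FlowStatistics:
    """Temporary per-interval registry of detected faces (S2030)."""
    def __init__(self):
        self.faces = {}  # face_id -> Appearance

    def observe(self, face_id, timestamp):
        # face_id is assumed to come from the comparison step (same-person test).
        a = self.faces.get(face_id)
        if a is None:
            self.faces[face_id] = Appearance(first_seen=timestamp, last_seen=timestamp)
        else:
            a.count += 1
            a.last_seen = timestamp

    def summary(self):
        return {
            "distinct_faces": len(self.faces),
            "durations": {fid: a.last_seen - a.first_seen
                          for fid, a in self.faces.items()},
        }
```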
In one embodiment, when the face image source is video, the video may be multi-channel network video sources, offline video or real-time video. In order to obtain video frame images, the data input module further comprises a video decoding unit, and the data output module further comprises a video assembling and distribution unit;
the video decoding unit is configured to read a live video stream or a local video file and parse out video frame images;
the video assembling and distribution unit is configured to re-encode into video the original video frame images onto which the face detection frames and the face-related information have been superimposed.
In this embodiment, the data input module may send the image sequence to be detected to the face analysis module by means of a queue.
In one embodiment, during video assembly, the face detection frames and recognized user names obtained by the face analysis module are added to the original video frames, which are then re-encoded as video in x264 format; after assembly, the video is broadcast via live555 for playback by a browser or other clients.
In one embodiment, the video decoding unit reads an RTSP video stream or a local video file, decodes video frame images with the VLC video decoder, and sends the images to the face analysis module through a buffer queue.
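The embodiment above names the VLC decoder and live555; as a substitute illustration only, the following sketch uses OpenCV's RTSP support to decode frames and hand them to the analysis side through a bounded queue. OpenCV, the sampling interval and the queue size are assumptions standing in for the actual decoding pipeline.

```python
import queue
import threading
import cv2  # OpenCV is used here in place of the VLC decoder named in the embodiment

frame_queue = queue.Queue(maxsize=64)   # buffer queue toward the face analysis module

def decode(rtsp_url, detect_every=6):
    """Read an RTSP stream (or local file path) and enqueue every 6th frame."""
    cap = cv2.VideoCapture(rtsp_url)
    frame_no = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % detect_every == 0:     # face detection once every 6 frames
            try:
                frame_queue.put((frame_no, frame), timeout=1.0)
            except queue.Full:
                pass                         # drop frames if analysis lags behind
        frame_no += 1
    cap.release()

# Example: run the decoder in a background thread; the analysis module
# would consume (frame_no, frame) tuples from frame_queue.
threading.Thread(target=decode, args=("rtsp://example/stream",), daemon=True).start()
```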
Preferably, the system further comprises a camera, and the data input module further comprises a video configuration unit for configuring the monitoring parameters of the video channel scene. In one embodiment, the video channel address needs to be configured. In this manner, the realized system can be applied to real-time surveillance, recognizing faces in the monitored scene in real time. With an alarm function added on the basis of the present disclosure, a complete face recognition monitoring service can be provided for scenes with security needs: for example, at crowd entrances and exits of interest to public security, faces of passers-by are captured automatically, personnel appearance features are recognized automatically, and fast automatic comparison with enrolled suspects is performed; if a suspicious person is found, an alarm is given. As another example, the system can be applied to community access-control security.
In one embodiment, the system can display video and recognition results in real time. When the user needs to recognize VIP customers in order to serve them, the system can also recognize VIP customers entering the store and give a prompt; help the user count the number of customers entering the store each day and the time and number of visits of each customer; save the face pictures of customers entering the store, compute and save appearance attributes such as gender, age, and whether glasses, a hat or a mask are worn, and save the entry time and duration, so that the user can filter customers by query conditions; and display information on customers who have recently entered the store, including captured face pictures, visit time, place and number of visits.
In one embodiment, the system provides a video surveillance service for a public security system: in the monitored area, faces entering the scene are detected and compared with enrolled suspicious persons, and when a monitored person is found, a prompt is given. The prompt may be one of, or any combination of, the following: static text or graphics, dynamic text or graphics, and sound.
To facilitate implementation of the system, in one embodiment a method for implementing the disclosed system is provided, namely a face recognition method, the method comprising at least:
S100, detecting and tracking faces: receiving the image sequence to be detected, detecting and tracking faces in the images to be detected, performing quality judgment, and selecting frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a number of image frames within a certain time interval;
S200, comparing faces: extracting the face features of the key frames, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier;
S300, outputting results: marking the faces detected in the key frames with marking frames and overlaying the information relevant to each face on the original video; and/or sending subscribed event messages to terminal users.
In this embodiment, the output of the data input module is face images; its input may be multi-channel network video sources, image sequences, offline video or real-time video, as long as images containing faces can be obtained after processing. When detecting, the system may detect every received image, or preferably only a subset of them. In one embodiment, the system performs face detection once every 6 frames. During detection, face positions and face key-point information are extracted from the image; the key-point information may include the positions of the eye corners, eyebrow ends, mouth corners and nose. When the image sequence is a single frame, that image itself serves as the key frame; when the sequence contains multiple frames, the N frames of best quality are selected as key frames. Quality is judged by scoring the indicators described below and choosing the top N frames by score; the indicators include face sharpness, size, real-face likelihood, occlusion, illumination and so on. The face feature is represented by a multi-dimensional feature vector; in one embodiment, a feature vector of about 180 dimensions is used. Once a face is detected, it is tracked in subsequent frames. When searching, the N groups of face features are taken as a whole, similar faces are retrieved from the user information database, and the several faces with the highest scores are returned as the result. The results of comparison and analysis, together with the original images, may be sent to the image output unit and the video assembling and distribution unit via the HTTP protocol or a message server. In one embodiment, when a face from the face database is recognized in the monitored area, the system immediately issues a notification or alarm; the message sent to the subscriber includes the recognition result (e.g., VIP or suspicious person), time and place, face picture, and other information about the person of interest.
In one embodiment, a method for the quality judgment in S100 is given, comprising the following steps:
S1010, for each detected face image, first judging whether the eye distance meets the set requirement; if so, performing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, performing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal-face score meets the set requirement; if so, judging that the frame can be used for face recognition; otherwise, discarding the detected face image.
In one embodiment, a concrete implementation for selecting key frames is provided. In this embodiment, for each capture of a tracked face, the eye distance > 25, the face confidence score > 0.95 and the frontal-face score are used to judge whether the frame can be used for recognition. Further, this embodiment also provides a programmatic key-frame selection method: for each image tracked as the same face, a key-frame container with a capacity of 10 is maintained internally. At the start, while fewer than 10 frames have been collected, every frame is stored in the container; after 10 frames, a frame that is suitable for recognition and whose frame number is more than 10 apart from the last stored frame replaces the lowest-quality frame in the container. The number of frames processed for the face being tracked is recorded, and tracking ends once this number exceeds 20.
In one embodiment, an implementation of the detection and tracking in S100 is given, comprising the following steps:
S101, performing face detection once every several frames, and when a face is detected, marking the part of the image containing the face that meets the quality requirements with a marking frame;
S102, judging whether the newly marked face area overlaps with an already-detected face area; when the overlap ratio meets a predetermined threshold, considering it to be the same face as the detected one and proceeding to step S103; otherwise, considering the currently marked face to be a new face and ending the tracking;
S103, performing face alignment on the newly marked face within the marking frame, detecting the face key-point positions, calculating the bounding rectangle of the face key points, and replacing the previously detected image in the marking frame that is considered to belong to the same face.
In this embodiment, a marking frame is used to mark the part containing the face. The marked part may be the head; preferably it may also include the shoulders, and including the shoulders can improve the recognition rate. Whichever manner is adopted, the overlap ratio can be measured by a confidence value: when the computed confidence reaches a certain range, the two objects can be considered the same target, and the required range can be determined by testing.
Preferably, multiple databases and parallel retrieval are used, that is, the user information database in S200 comprises multiple sub-databases, and the search is performed in parallel over the multiple sub-databases. Comparison and analysis are then performed on the retrieval results, and the analysis results are merged. This approach supports importing a large number of face images into the user information database without increasing the retrieval time. Each sub-database imports a certain number of face images, and the multiple face images of a single person are imported into the same sub-database. In one embodiment, each sub-database is retrieved in parallel by multiple threads, and the results of the sub-databases are then merged according to the comparison results.
In one embodiment, a method for obtaining the face features of enrolled face images is given: the extraction in S200 uses the DeepID deep learning algorithm to extract face features. In one embodiment, this extraction method yields a feature vector of about 180 dimensions.
Since face features are represented by multi-dimensional feature vectors, one embodiment provides an approach for reducing the number of comparisons and speeding up the search for similar feature vectors, that is, the similar face features in S200 are obtained through the following steps:
S2011, building a KD tree: in the process of searching for similar face features, building a KD tree and searching for the K nearest neighbours, where K >= M;
S2012, traversing the KD tree: during traversal, at each level one dimension of the face feature is chosen for comparison to determine which branch to search at the next level, finally determining the multiple face features similar to the key frame.
In one embodiment, in step S102, when a newly marked face is judged to be the same face as an already-detected face, the same second identifier is used to identify the newly marked face image and the detected face; and the comparison and analysis in S200 comprises the following steps:
S201, for the M frames marked with the same second identifier, computing a quality score q_i, i ∈ [1, M], according to whether the face is frontal and its sharpness;
S202, for each of the M frames, retrieving and comparing against the face database to find the N most similar users, with corresponding similarities S_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, for the K users obtained in total from the retrieval over the M frames, computing an aggregate similarity score for each of the K users;
S204, sorting the K users in descending order of the aggregate score and choosing the several most similar users.
Under this comparison scheme, if the user information database comprises multiple sub-databases, the final recognition result can be obtained in several ways. For example, after performing parallel retrieval over the sub-databases, the results returned by each sub-database are combined, the similarities of all the most similar users are sorted as in steps S202-S204, and a selection is made. As another example, each sub-database returns its top-ranked face features, the returned face features are re-sorted by similarity value, and the face images corresponding to the top-ranked features under the new ordering are selected as the result.
Optionally, after the comparison and analysis, step S200 also performs the following operations:
S2031, computing face attributes using a deep learning method;
S2032, judging whether the detected face is present in the user information database; if so, updating the face attribute results; otherwise, storing the recognition result together with the face attribute results.
The face attributes include appearance attributes such as the user's gender, age, and whether glasses, a hat or a mask are worn. Storing face attributes allows the system, when providing an external search function, to add retrieval dimensions: detected faces can be filtered by time, by similarity to enrolled faces, by appearance attributes and by place, narrowing the search range, speeding up retrieval and improving retrieval accuracy.
Optionally, on the basis of storing the face attribute results, a statistics time point and place may also be attached to each result, that is, the face attribute results also include the time point and place at which the face image was first acquired. This provides data support for locating when and whether a particular face has appeared in a certain area. In one embodiment, the system builds a separate user information database for special persons such as VIPs or suspicious persons; when a user queries such persons, the query can be compared directly against the face features of the face images stored in that database, making it easy and fast to locate when and whether a particular face has appeared in a certain area.
In one embodiment, the method includes a visitor-flow statistics function, that is, before step S2031, step S200 also performs the following operation:
S2030, counting the number of faces detected within a certain period, and the time of appearance and duration of each face.
In one embodiment, when the face image source is video, the video may be multi-channel network video sources, offline video or real-time video. In order to obtain video frame images, before receiving the image sequence to be detected, step S100 further comprises reading a live video stream or a local video file and parsing out a sequence of video frame images.
Correspondingly, in step S300, after the faces detected in the key frames have been marked with marking frames and the face-related information has been superimposed on the original video, step S300 further comprises re-encoding into video the original video frame sequence onto which the face detection frames and the face-related information have been superimposed.
In one embodiment, step S100 performs detection and tracking on the images of the image sequence to be detected in turn by means of a queue.
In one embodiment, in order to conveniently obtain the live video stream, video is captured by a camera, and the method further comprises configuring the video channel scene monitoring parameters for the camera that captures the live video stream. In this manner, the realized system can be applied to real-time surveillance, recognizing faces in the monitored scene in real time. With an alarm function added on the basis of the present disclosure, a complete face recognition monitoring service can be provided for scenes with security needs: for example, at crowd entrances and exits of interest to public security, faces of passers-by are captured automatically, personnel appearance features are recognized automatically, and fast automatic comparison with enrolled suspects is performed; if a suspicious person is found, an alarm is given. As another example, the method can be applied to community access-control security.
In one embodiment, the method can display video and recognition results in real time. When the user needs to recognize VIP customers in order to serve them, the method can also recognize VIP customers entering the store and give a prompt; help the user count the number of customers entering the store each day and the time and number of visits of each customer; save the face pictures of customers entering the store, compute and save appearance attributes such as gender, age, and whether glasses, a hat or a mask are worn, and save the entry time and duration, so that the user can filter customers by query conditions; and display information on customers who have recently entered the store, including captured face pictures, visit time, place and number of visits.
In one embodiment, the system realized by the method provides a video surveillance service for a public security system: in the monitored area, faces entering the scene are detected and compared with enrolled suspicious persons, and when a monitored person is found, a prompt is given. The prompt may be one of, or any combination of, the following: static text or graphics, dynamic text or graphics, and sound.
The face recognition technology of the present disclosure can quickly identify visiting customers through face recognition and associate them with the relevant customer information database. Through data statistics, frequently visiting and valuable potential customers can also be treated as potential VIP customers, served with priority and recommended targeted products. In addition, with an alarm function added on the basis of the present disclosure, a complete face recognition monitoring service can be provided for public security: at crowd entrances and exits of interest to public security, faces of passers-by are captured automatically, personnel appearance features are recognized automatically, fast automatic comparison with enrolled suspects is performed, and if a suspicious person is found, an alarm is given.
The present disclosure has been described in detail above, and specific examples have been used herein to illustrate its principles and embodiments; the description of the above embodiments is only intended to help understand the method of the present disclosure and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application according to the idea of the present disclosure. In summary, the contents of this description should not be construed as limiting the present disclosure.
Claims (24)
1. A face recognition system, characterized in that the system comprises at least:
a data input module, a face analysis module and a data output module;
the data input module is configured to transmit the image sequence to be detected to the face analysis module;
the data output module comprises an image output unit and/or a message subscription unit;
the image output unit is configured to mark the faces recognized by the face analysis module and overlay the information relevant to each face on the original video;
the message subscription unit is configured to send event messages to subscribing terminal users;
wherein the face analysis module is configured to detect, analyze and recognize faces in the image sequence to be detected, and comprises at least the following units:
U100, a face detection and tracking unit: detecting and tracking faces in the received images, performing quality judgment, selecting frames that meet the requirements as key frames, and passing them to the face comparison unit;
U200, a face comparison unit: receiving the key frames, extracting the face feature of each frame, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier.
2. The system according to claim 1, characterized in that the quality judgment in U100 comprises the following steps:
S1010, for each detected face image, first judging whether the eye distance meets the set requirement; if so, performing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, performing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal-face score meets the set requirement; if so, judging that the frame can be used for face recognition; otherwise, discarding the detected face image.
3. The system according to claim 1, characterized in that the detection and tracking in U100 comprises the following steps:
S101, performing face detection once every several frames, and when a face is detected, marking the part of the image containing the face that meets the quality requirements with a marking frame;
S102, judging whether the marked face area overlaps with an already-detected face area; when the overlap ratio meets a predetermined threshold, considering it to be the same face as the detected one and proceeding to step S103; otherwise, considering the currently marked face to be a new face and ending the tracking;
S103, performing face alignment on the marked face within the marking frame, detecting the face key-point positions, calculating the bounding rectangle of the face key points, and replacing the previously detected image in the marking frame that is considered to belong to the same face.
4. The system according to claim 1, characterized in that the user information database in U200 comprises multiple sub-databases, and the search is performed in parallel over the multiple sub-databases.
5. The system according to claim 1, characterized in that the extraction in U200 uses the DeepID deep learning algorithm to extract face features.
6. The system according to claim 1, characterized in that the similar face features in U200 are obtained through the following steps:
S2011, building a KD tree: in the process of searching for similar face features, building a KD tree and searching for the K nearest neighbours, where K >= M;
S2012, traversing the KD tree: during traversal, at each level one dimension of the face feature is chosen for comparison to determine which branch to search at the next level, finally determining the multiple face features similar to the key frame.
7. The system according to claim 3, characterized in that:
in step S102, when the marked face is judged to be the same face as an already-detected face, the same second identifier is used to identify the marked face image and the detected face;
and the comparison and analysis in U200 comprises the following steps:
S201, for the M frames marked with the same second identifier, computing a quality score q_i, i ∈ [1, M], according to whether the face is frontal and its sharpness;
S202, for each of the M frames, retrieving and comparing against the face database to find the N most similar users, with corresponding similarities S_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, for the K users obtained in total from the retrieval over the M frames, computing an aggregate similarity score for each of the K users;
S204, sorting the K users in descending order of the aggregate score and choosing the several most similar users.
8. The system according to claim 7, characterized in that, after the comparison and analysis, the face comparison unit also performs the following operations:
S2031, computing face attributes using a deep learning method;
S2032, judging whether the detected face is present in the user information database; if so, updating the face attribute results; otherwise, storing the recognition result together with the face attribute results.
9. The system according to claim 8, characterized in that the face attribute results further include the time point and place at which the face image was first acquired.
10. The system according to claim 8, characterized in that, before step S2031, the face comparison unit also performs the following operation:
S2030, counting the number of faces detected within a certain period, and the time of appearance and duration of each face.
11. The system according to claim 1, characterized in that the data input module further comprises a video decoding unit, and the data output module further comprises a video assembling and distribution unit;
the video decoding unit is configured to read a live video stream or a local video file and parse out a sequence of video frame images;
the video assembling and distribution unit is configured to re-encode into video the original video frame images onto which the face detection frames and the face-related information have been superimposed.
12. The system according to claim 11, characterized in that the system further comprises a camera, the data input module further comprises a video configuration unit, and the video configuration unit is configured to configure the monitoring parameters of the video channel scene.
13. A face recognition method, characterized in that the method comprises at least:
S100, detecting and tracking faces: receiving the image sequence to be detected, detecting and tracking faces in the images to be detected, performing quality judgment, and selecting frames that meet the requirements as key frames for comparison and analysis; the image sequence to be detected is a number of image frames within a certain time interval;
S200, comparing faces: extracting the face features of the key frames, and searching the user information database to select multiple similar face features for comparison;
wherein the face feature is represented by a multi-dimensional feature vector;
and the user information database allows M face images of a single person to be stored under the same first identifier;
S300, outputting results: marking the faces detected in the key frames with marking frames and overlaying the information relevant to each face on the original video; and/or sending event messages to subscribing terminal users.
14. The method according to claim 13, characterized in that the quality judgment in S100 comprises the following steps:
S1010, for each detected face image, first judging whether the eye distance meets the set requirement; if so, performing step S1011; otherwise, discarding the detected face image;
S1011, calculating whether the face confidence score of the detected face image meets the set requirement; if so, performing step S1012; otherwise, discarding the detected face image;
S1012, calculating whether the frontal-face score meets the set requirement; if so, judging that the frame can be used for face recognition; otherwise, discarding the detected face image.
15. The method according to claim 13, characterized in that the detection and tracking in S100 comprises the following steps:
S101, performing face detection once every several frames, and when a face is detected, marking the part of the image containing the face that meets the quality requirements with a marking frame;
S102, judging whether the marked face area overlaps with an already-detected face area; when the overlap ratio meets a predetermined threshold, considering it to be the same face as the detected one and proceeding to step S103; otherwise, considering the currently marked face to be a new face and ending the tracking;
S103, performing face alignment on the marked face within the marking frame, detecting the face key-point positions, calculating the bounding rectangle of the face key points, and replacing the previously detected image in the marking frame that is considered to belong to the same face.
16. The method according to claim 13, characterized in that the user information database in S200 comprises multiple sub-databases, and the search is performed in parallel over the multiple sub-databases.
17. The method according to claim 13, characterized in that the extraction in S200 uses the DeepID deep learning algorithm to extract face features.
18. The method according to claim 13, characterized in that the similar face features in S200 are obtained through the following steps:
S2011, building a KD tree: in the process of searching for similar face features, building a KD tree and searching for the K nearest neighbours, where K >= M;
S2012, traversing the KD tree: during traversal, at each level one dimension of the face feature is chosen for comparison to determine which branch to search at the next level, finally determining the multiple face features similar to the key frame.
19. The method according to claim 15, characterized in that:
in step S102, when the newly marked face is judged to be the same face as an already-detected face, the same second identifier is used to identify the newly marked face image and the detected face;
and the comparison and analysis in S200 comprises the following steps:
S201, for the M frames marked with the same second identifier, computing a quality score q_i, i ∈ [1, M], according to whether the face is frontal and its sharpness;
S202, for each of the M frames, retrieving and comparing against the face database to find the N most similar users, with corresponding similarities S_{i,user_j}, i ∈ [1, M], j ∈ [1, N];
S203, for the K users obtained in total from the retrieval over the M frames, computing an aggregate similarity score for each of the K users;
S204, sorting the K users in descending order of the aggregate score and choosing the several most similar users.
20. The method according to claim 19, characterized in that, after the comparison and analysis, step S200 also performs the following operations:
S2031, computing face attributes using a deep learning method;
S2032, judging whether the detected face is present in the user information database; if so, updating the face attribute results; otherwise, storing the recognition result together with the face attribute results.
21. The method according to claim 20, characterized in that the face attribute results further include the time point and place at which the face image was first acquired.
22. The method according to claim 20, characterized in that, before step S2031, step S200 also performs the following operation:
S2030, counting the number of faces detected within a certain period, and the time of appearance and duration of each face.
23. The method according to claim 13, characterized in that:
before receiving the image sequence to be detected, step S100 further comprises reading a live video stream or a local video file and parsing out a sequence of video frame images;
and in step S300, after the faces detected in the key frames have been marked with marking frames and the face-related information has been superimposed on the original video, step S300 further comprises re-encoding into video the original video frame sequence onto which the face detection frames and the face-related information have been superimposed.
24. The method according to claim 23, characterized in that the real-time video stream is obtained through a camera, and the method further comprises configuring the video channel scene monitoring parameters for the camera that captures the live video stream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510872357.3A CN105488478B (en) | 2015-12-02 | 2015-12-02 | Face recognition system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510872357.3A CN105488478B (en) | 2015-12-02 | 2015-12-02 | Face recognition system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105488478A true CN105488478A (en) | 2016-04-13 |
CN105488478B CN105488478B (en) | 2020-04-07 |
Family
ID=55675450
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510872357.3A Active CN105488478B (en) | 2015-12-02 | 2015-12-02 | Face recognition system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105488478B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003187352A (en) * | 2001-12-14 | 2003-07-04 | Nippon Signal Co Ltd:The | System for detecting specified person |
CN1428718A (en) * | 2002-06-21 | 2003-07-09 | 成都银晨网讯科技有限公司 | Airport outgoing passenger intelligent identity identification method and system |
CN101089875A (en) * | 2006-06-15 | 2007-12-19 | 株式会社东芝 | Face authentication apparatus, face authentication method, and entrance and exit management apparatus |
CN101502088A (en) * | 2006-10-11 | 2009-08-05 | 思科技术公司 | Interaction based on facial recognition of conference participants |
CN101404107A (en) * | 2008-11-19 | 2009-04-08 | 公安部第三研究所 | Internet bar monitoring and warning system based on human face recognition technology |
CN101404094A (en) * | 2008-11-28 | 2009-04-08 | 中国电信股份有限公司 | Video monitoring and warning method and system |
Non-Patent Citations (3)
Title |
---|
李珍: "Research on Target Recognition and Localization Methods Based on Feature Matching", China Master's Theses Full-text Database, Information Science and Technology Series *
郭沛: "Eyeglass Removal and Region Restoration in Face Images", China Master's Theses Full-text Database, Information Science and Technology Series *
骆超: "A Low-Power Embedded Real-Time Face Recognition System", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358141A (en) * | 2016-05-10 | 2017-11-17 | 阿里巴巴集团控股有限公司 | The method and device of data identification |
CN107358141B (en) * | 2016-05-10 | 2020-10-23 | 阿里巴巴集团控股有限公司 | Data identification method and device |
CN106327546A (en) * | 2016-08-24 | 2017-01-11 | 北京旷视科技有限公司 | Face detection algorithm test method and device |
CN106845355A (en) * | 2016-12-24 | 2017-06-13 | 深圳云天励飞技术有限公司 | A kind of method of recognition of face, server and system |
CN106878670A (en) * | 2016-12-24 | 2017-06-20 | 深圳云天励飞技术有限公司 | A kind of method for processing video frequency and device |
CN106845356A (en) * | 2016-12-24 | 2017-06-13 | 深圳云天励飞技术有限公司 | A kind of method of recognition of face, client, server and system |
CN106878670B (en) * | 2016-12-24 | 2018-04-20 | 深圳云天励飞技术有限公司 | A kind of method for processing video frequency and device |
CN106682650A (en) * | 2017-01-26 | 2017-05-17 | 北京中科神探科技有限公司 | Mobile terminal face recognition method and system based on technology of embedded deep learning |
CN106919917A (en) * | 2017-02-24 | 2017-07-04 | 北京中科神探科技有限公司 | Face comparison method |
CN106961595A (en) * | 2017-03-21 | 2017-07-18 | 深圳市科漫达智能管理科技有限公司 | A kind of video frequency monitoring method and video monitoring system based on augmented reality |
CN110475503A (en) * | 2017-03-30 | 2019-11-19 | 富士胶片株式会社 | The working method of medical image processing device and endoscopic system and medical image processing device |
US11412917B2 (en) | 2017-03-30 | 2022-08-16 | Fujifilm Corporation | Medical image processor, endoscope system, and method of operating medical image processor |
CN107133568A (en) * | 2017-03-31 | 2017-09-05 | 浙江零跑科技有限公司 | A kind of speed limit prompting and hypervelocity alarm method based on vehicle-mounted forward sight camera |
CN107292240A (en) * | 2017-05-24 | 2017-10-24 | 深圳市深网视界科技有限公司 | It is a kind of that people's method and system are looked for based on face and human bioequivalence |
CN107292240B (en) * | 2017-05-24 | 2020-09-18 | 深圳市深网视界科技有限公司 | Person finding method and system based on face and body recognition |
CN109033924A (en) * | 2017-06-08 | 2018-12-18 | 北京君正集成电路股份有限公司 | The method and device of humanoid detection in a kind of video |
US11361586B2 (en) | 2017-06-15 | 2022-06-14 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method for sending warning information, storage medium and terminal |
CN108875488B (en) * | 2017-09-29 | 2021-08-06 | 北京旷视科技有限公司 | Object tracking method, object tracking apparatus, and computer-readable storage medium |
CN108875488A (en) * | 2017-09-29 | 2018-11-23 | 北京旷视科技有限公司 | Method for tracing object, object tracking device and computer readable storage medium |
CN107679613A (en) * | 2017-09-30 | 2018-02-09 | 同观科技(深圳)有限公司 | A kind of statistical method of personal information, device, terminal device and storage medium |
CN107784294B (en) * | 2017-11-15 | 2021-06-11 | 武汉烽火众智数字技术有限责任公司 | Face detection and tracking method based on deep learning |
CN107784294A (en) * | 2017-11-15 | 2018-03-09 | 武汉烽火众智数字技术有限责任公司 | A kind of persona face detection method based on deep learning |
CN108038422B (en) * | 2017-11-21 | 2021-12-21 | 平安科技(深圳)有限公司 | Camera device, face recognition method and computer-readable storage medium |
CN108038422A (en) * | 2017-11-21 | 2018-05-15 | 平安科技(深圳)有限公司 | Camera device, the method for recognition of face and computer-readable recording medium |
CN108229320A (en) * | 2017-11-29 | 2018-06-29 | 北京市商汤科技开发有限公司 | Select frame method and device, electronic equipment, program and medium |
CN108228742A (en) * | 2017-12-15 | 2018-06-29 | 深圳市商汤科技有限公司 | Face duplicate checking method and apparatus, electronic equipment, medium, program |
CN108124157A (en) * | 2017-12-22 | 2018-06-05 | 北京旷视科技有限公司 | Information interacting method, apparatus and system |
CN108124157B (en) * | 2017-12-22 | 2020-08-07 | 北京旷视科技有限公司 | Information interaction method, device and system |
CN108241853A (en) * | 2017-12-28 | 2018-07-03 | 深圳英飞拓科技股份有限公司 | A kind of video frequency monitoring method, system and terminal device |
CN110008793A (en) * | 2018-01-05 | 2019-07-12 | 中国移动通信有限公司研究院 | Face identification method, device and equipment |
CN108345851A (en) * | 2018-02-02 | 2018-07-31 | 成都睿码科技有限责任公司 | A method of based on recognition of face analyzing personal hobby |
CN108399247A (en) * | 2018-03-01 | 2018-08-14 | 深圳羚羊极速科技有限公司 | A kind of generation method of virtual identity mark |
CN110298213B (en) * | 2018-03-22 | 2021-07-30 | 赛灵思电子科技(北京)有限公司 | Video analysis system and method |
CN110298213A (en) * | 2018-03-22 | 2019-10-01 | 北京深鉴智能科技有限公司 | Video analytic system and method |
CN108647581A (en) * | 2018-04-18 | 2018-10-12 | 深圳市商汤科技有限公司 | Information processing method, device and storage medium |
CN108875556A (en) * | 2018-04-25 | 2018-11-23 | 北京旷视科技有限公司 | Method, apparatus, system and the computer storage medium veritified for the testimony of a witness |
CN108446681A (en) * | 2018-05-10 | 2018-08-24 | 深圳云天励飞技术有限公司 | Pedestrian's analysis method, device, terminal and storage medium |
CN108805040A (en) * | 2018-05-24 | 2018-11-13 | 复旦大学 | It is a kind of that face recognition algorithms are blocked based on piecemeal |
CN108805046B (en) * | 2018-05-25 | 2022-11-04 | 京东方科技集团股份有限公司 | Method, apparatus, device and storage medium for face matching |
CN108805046A (en) * | 2018-05-25 | 2018-11-13 | 京东方科技集团股份有限公司 | For the method for facial match, unit and storage medium |
CN110580425A (en) * | 2018-06-07 | 2019-12-17 | 北京华泰科捷信息技术股份有限公司 | Human face tracking snapshot and attribute analysis acquisition device and method based on AI chip |
CN109145707B (en) * | 2018-06-20 | 2021-09-14 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109145707A (en) * | 2018-06-20 | 2019-01-04 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2020001175A1 (en) * | 2018-06-26 | 2020-01-02 | Wildfaces Technology Limited | Method and apparatus for facilitating identification |
US11403880B2 (en) | 2018-06-26 | 2022-08-02 | Wildfaces Technology Limited | Method and apparatus for facilitating identification |
CN109034036B (en) * | 2018-07-19 | 2020-09-01 | 青岛伴星智能科技有限公司 | Video analysis method, teaching quality assessment method and system and computer-readable storage medium |
CN109034036A (en) * | 2018-07-19 | 2018-12-18 | 青岛伴星智能科技有限公司 | A kind of video analysis method, Method of Teaching Quality Evaluation and system, computer readable storage medium |
CN109344686A (en) * | 2018-08-06 | 2019-02-15 | 广州开瑞信息科技有限公司 | A kind of intelligent face recognition system |
CN109190527A (en) * | 2018-08-20 | 2019-01-11 | 合肥智圣新创信息技术有限公司 | A kind of garden personnel track portrait system monitored based on block chain and screen |
CN109344765A (en) * | 2018-09-28 | 2019-02-15 | 广州云从人工智能技术有限公司 | A kind of intelligent analysis method entering shop personnel analysis for chain shops |
CN109584208A (en) * | 2018-10-23 | 2019-04-05 | 西安交通大学 | A kind of method of inspection for industrial structure defect intelligent recognition model |
CN111126119A (en) * | 2018-11-01 | 2020-05-08 | 百度在线网络技术(北京)有限公司 | Method and device for counting user behaviors arriving at store based on face recognition |
CN111126119B (en) * | 2018-11-01 | 2024-08-20 | 百度在线网络技术(北京)有限公司 | Face recognition-based store user behavior statistics method and device |
CN111161206A (en) * | 2018-11-07 | 2020-05-15 | 杭州海康威视数字技术股份有限公司 | Image capturing method, monitoring camera and monitoring system |
CN109606376A (en) * | 2018-11-22 | 2019-04-12 | 海南易乐物联科技有限公司 | A kind of safe driving Activity recognition system based on vehicle intelligent terminal |
CN109635693B (en) * | 2018-12-03 | 2023-03-31 | 武汉烽火众智数字技术有限责任公司 | Front face image detection method and device |
CN109635693A (en) * | 2018-12-03 | 2019-04-16 | 武汉烽火众智数字技术有限责任公司 | A kind of face image detection method and device |
CN109726680A (en) * | 2018-12-28 | 2019-05-07 | 东方网力科技股份有限公司 | Face recognition method, device, system and electronic equipment |
CN109784231A (en) * | 2018-12-28 | 2019-05-21 | 广东中安金狮科创有限公司 | Safeguard information management method, device and storage medium |
CN109801394A (en) * | 2018-12-29 | 2019-05-24 | 南京天溯自动化控制系统有限公司 | A kind of staff's Work attendance method and device, electronic equipment and readable storage medium storing program for executing |
CN109801394B (en) * | 2018-12-29 | 2021-07-30 | 南京天溯自动化控制系统有限公司 | Staff attendance checking method and device, electronic equipment and readable storage medium |
CN109711369A (en) * | 2018-12-29 | 2019-05-03 | 深圳英飞拓智能技术有限公司 | Pedestrian count method, apparatus, system, computer equipment and storage medium |
CN110009662B (en) * | 2019-04-02 | 2021-09-17 | 北京迈格威科技有限公司 | Face tracking method and device, electronic equipment and computer readable storage medium |
CN110009662A (en) * | 2019-04-02 | 2019-07-12 | 北京迈格威科技有限公司 | Method, apparatus, electronic device, and computer-readable storage medium for face tracking |
CN110321857B (en) * | 2019-07-08 | 2021-08-17 | 苏州万店掌网络科技有限公司 | Accurate customer group analysis method based on edge computing technology |
CN110321857A (en) * | 2019-07-08 | 2019-10-11 | 苏州万店掌网络科技有限公司 | Accurate objective group analysis method based on edge calculations technology |
CN112579809A (en) * | 2019-09-27 | 2021-03-30 | 深圳云天励飞技术有限公司 | Data processing method and related device |
CN112329635B (en) * | 2020-11-06 | 2022-04-29 | 北京文安智能技术股份有限公司 | Method and device for counting store passenger flow |
CN112329635A (en) * | 2020-11-06 | 2021-02-05 | 北京文安智能技术股份有限公司 | Method and device for counting store passenger flow |
CN112329665A (en) * | 2020-11-10 | 2021-02-05 | 上海大学 | A face capture system |
CN112926542B (en) * | 2021-04-09 | 2024-04-30 | 博众精工科技股份有限公司 | Performance detection method and device, electronic equipment and storage medium |
CN112926542A (en) * | 2021-04-09 | 2021-06-08 | 博众精工科技股份有限公司 | Performance detection method and device, electronic equipment and storage medium |
CN113886682A (en) * | 2021-09-10 | 2022-01-04 | 平安科技(深圳)有限公司 | Information pushing method and system under shoulder and neck movement scene and storage medium |
CN113886682B (en) * | 2021-09-10 | 2024-09-27 | 平安科技(深圳)有限公司 | Information pushing method, system and storage medium in shoulder and neck movement scene |
US20230368502A1 (en) * | 2022-05-11 | 2023-11-16 | Verizon Patent And Licensing Inc. | System and method for facial recognition |
CN115187915A (en) * | 2022-09-07 | 2022-10-14 | 苏州万店掌网络科技有限公司 | Passenger flow analysis method, device, equipment and medium |
CN116912925A (en) * | 2023-09-14 | 2023-10-20 | 齐鲁空天信息研究院 | Face recognition method, device, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN105488478B (en) | 2020-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105488478A (en) | Face recognition system and method | |
CN205451095U (en) | A face-identifying device | |
CN105574506A (en) | Intelligent face tracking system and method based on depth learning and large-scale clustering | |
CN109344787B (en) | A specific target tracking method based on face recognition and pedestrian re-identification | |
US10779037B2 (en) | Method and system for identifying relevant media content | |
US10546186B2 (en) | Object tracking and best shot detection system | |
US8869198B2 (en) | Producing video bits for space time video summary | |
CN102999640B (en) | Based on the video of semantic reasoning and structural description and image indexing system and method | |
US20120207356A1 (en) | Targeted content acquisition using image analysis | |
CN107563343B (en) | FaceID database self-improvement method based on face recognition technology | |
CN103678417B (en) | Human-machine interaction data treating method and apparatus | |
US8737688B2 (en) | Targeted content acquisition using image analysis | |
US20210357624A1 (en) | Information processing method and device, and storage medium | |
US20130148898A1 (en) | Clustering objects detected in video | |
CN109344271A (en) | Video portrait records handling method and its system | |
CN104317918B (en) | Abnormal behaviour analysis and warning system based on compound big data GIS | |
CN111476183A (en) | Passenger flow information processing method and device | |
WO2016162963A1 (en) | Image search device, system, and method | |
CN106203458A (en) | Crowd's video analysis method and system | |
CN109344765A (en) | A kind of intelligent analysis method entering shop personnel analysis for chain shops | |
CN109492604A (en) | Faceform's characteristic statistics analysis system | |
CN105589974A (en) | Surveillance video retrieval method and system based on Hadoop platform | |
CN108804527A (en) | Based on wechat region circle of friends data analysis system and method | |
WO2021102760A1 (en) | Method and apparatus for analyzing behavior of person, and electronic device | |
CN109902681B (en) | User group relation determining method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||