CN109871834A - Information processing method and device - Google Patents
- Publication number: CN109871834A
- Application number: CN201910211760.XA
- Authority
- CN
- China
- Prior art keywords
- video
- video frame
- target face
- indicate
- response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Collating Specific Patterns (AREA)
Abstract
Embodiments of the present disclosure disclose an information processing method and device. One specific embodiment of the method includes: capturing a video of a target face object in response to detecting an identity authentication trigger signal; extracting at least two video frames from the video; matching two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result indicates whether the faces corresponding to the two matched video frames belong to the same person; and, in response to determining that the obtained video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, generating, based on the video, result information indicating whether the target face object is a living face. This embodiment improves the accuracy and reliability of the generated result information and helps to achieve more reliable identity authentication.
Description
Technical field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to an information processing method and device.
Background
With the development of face recognition technology, people can log in to accounts, make payments, unlock devices, and so on by "scanning their face". This brings convenience to people's lives, but it also carries risk: since a machine can recognize a face, it can certainly also recognize an image of a face. There is therefore a risk that criminals may disguise their identity using an image of someone else's face.
Summary of the invention
Embodiments of the present disclosure propose an information processing method and device.
In a first aspect, an embodiment of the present disclosure provides an information processing method. The method includes: capturing a video of a target face object in response to detecting an identity authentication trigger signal; extracting at least two video frames from the video; matching two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result indicates whether the faces corresponding to the two matched video frames belong to the same person; and, in response to determining that the obtained video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, generating, based on the video, result information indicating whether the target face object is a living face.
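The first-aspect flow can be sketched as follows. This is a minimal illustration only: the helper names (`process_authentication`, `same_person`, `is_live`) are hypothetical and do not come from the patent, and the identity matcher and liveness detector are passed in as stubs so that only the claimed control flow is shown.

```python
def process_authentication(video_frames, same_person, is_live):
    """Return a liveness result only when the sampled frame pair shows
    the same person; otherwise report a match failure.

    video_frames : list of frames captured after the trigger signal
    same_person  : callable(frame_a, frame_b) -> bool (identity match)
    is_live      : callable(frames) -> bool (liveness detection)
    """
    # Step 1: sample at least two frames (here: first and last).
    sampled = [video_frames[0], video_frames[-1]]

    # Step 2: pairwise identity match between the sampled frames.
    if not same_person(sampled[0], sampled[1]):
        return {"matched": False, "live": None}

    # Step 3: only a consistent identity proceeds to liveness detection.
    return {"matched": True, "live": is_live(video_frames)}
```

Note how liveness detection is skipped entirely when the identity match fails, which is the gating behavior the claim describes.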
In some embodiments, the method further includes: outputting a preset action instruction in response to detecting the identity authentication trigger signal, where the action instruction instructs the target face object to perform an action. In these embodiments, generating, based on the video, result information indicating whether the target face object is a living face includes: determining, based on the video, whether the target face object performed the action; in response to determining that the target face object performed the action, generating result information indicating that the target face object is a living face; and in response to determining that the target face object did not perform the action, generating result information indicating that the target face object is a non-living face.
In some embodiments, generating, based on the video, result information indicating whether the target face object is a living face includes: inputting a video frame of the video into a pre-trained liveness detection model to obtain the result information.
In some embodiments, the liveness detection model is obtained through machine learning.
In some embodiments, the liveness detection model is a silent liveness detection model.
In some embodiments, the method further includes: in response to determining that the result information indicates that the target face object is a non-living face, outputting first prompt information characterizing an identity authentication failure.
In some embodiments, the method further includes: in response to determining that the result information indicates that the target face object is a living face, selecting a video frame from the at least two video frames as an identity authentication image corresponding to the target face object; sending the identity authentication image to a server; and obtaining a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored facial image of the user.
In some embodiments, matching two video frames among the at least two video frames to obtain the video frame matching result includes: inputting each of the two video frames into a pre-trained face recognition model to obtain two feature vectors; determining the similarity between the two obtained feature vectors; in response to determining that the similarity is less than or equal to a preset threshold, generating a video frame matching result indicating that the faces corresponding to the two matched video frames do not belong to the same person; and in response to determining that the similarity is greater than the preset threshold, generating a video frame matching result indicating that the faces corresponding to the two matched video frames belong to the same person.
In some embodiments, the face recognition model is obtained through machine learning.
In some embodiments, the method further includes: in response to determining that the obtained video frame matching results include a video frame matching result indicating that the faces corresponding to two matched video frames do not belong to the same person, outputting second prompt information characterizing an identity authentication failure.
In a second aspect, an embodiment of the present disclosure provides an information processing device. The device includes: a capturing unit configured to capture a video of a target face object in response to detecting an identity authentication trigger signal; an extraction unit configured to extract at least two video frames from the video; a matching unit configured to match two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result indicates whether the faces corresponding to the two matched video frames belong to the same person; and a generation unit configured to generate, based on the video and in response to determining that the obtained video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, result information indicating whether the target face object is a living face.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including: one or more processors; a storage device storing one or more programs; and a camera configured to capture video. When the one or more programs are executed by the one or more processors, the one or more processors implement the method of any embodiment of the information processing method above.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method of any embodiment of the information processing method above.
According to the information processing method and device provided by the embodiments of the present disclosure, a video of a target face object is captured in response to detecting an identity authentication trigger signal; at least two video frames are then extracted from the video, and two video frames among the at least two video frames are matched to obtain a video frame matching result, where the video frame matching result indicates whether the faces corresponding to the two matched video frames belong to the same person; then, in response to determining that the video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, result information indicating whether the target face object is a living face is generated based on the video. In this way, before the result information is generated, it can be detected whether the faces in the captured video belong to the same person, and the step of generating the result information is executed only when they do. This improves the accuracy and reliability of the generated result information and helps to achieve more reliable identity authentication.
Brief description of the drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the information processing method according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the information processing method according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of another embodiment of the information processing method according to the present disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the information processing device according to the present disclosure;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement the terminal device of an embodiment of the present disclosure.
Detailed description of the embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, where there is no conflict, the embodiments in the present disclosure and the features of those embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the information processing method or information processing device of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as payment software, shopping applications, web browsers, search applications, instant messaging tools, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices equipped with a camera, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a background server that sends identity authentication trigger signals to the terminal devices 101, 102, 103. The background server may send an identity authentication trigger signal to a terminal device so that the terminal device can analyze and process data such as the received identity authentication trigger signal, obtaining a processing result (such as result information).
It should be noted that the information processing method provided by the embodiments of the present disclosure is generally executed by the terminal devices 101, 102, 103; correspondingly, the information processing device is generally provided in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs. In the case where the data used in generating the result information does not need to be obtained remotely, the system architecture may include only a terminal device, without the network and the server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the information processing method according to the present disclosure is shown. The information processing method includes the following steps:
Step 201: in response to detecting an identity authentication trigger signal, capture a video of a target face object.
In the present embodiment, the executing body of the information processing method (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may capture a video of a target face object in response to detecting an identity authentication trigger signal. Here, the identity authentication trigger signal is a signal for triggering an identity authentication operation, which is an operation for authenticating whether the person corresponding to the captured target face object is a pre-registered user. Specifically, the identity authentication trigger signal may be a signal generated by a trigger operation performed by a user (for example, clicking an identity authentication trigger button on the executing body), or a signal sent, via a wired or wireless connection, by an electronic device in communication connection with the executing body (for example, the server 105 shown in Fig. 1). In particular, the executing body may use its camera to shoot continuously and generate the identity authentication trigger signal in response to capturing a target face object.
In practice, when identity authentication is performed through face recognition, in order to prevent criminals from performing identity authentication with a fake face (such as a facial image) of a pre-registered user and thereby stealing an identity, liveness detection is generally performed on the target face object used for identity authentication before face matching is carried out, so as to determine whether the target face object is a living face.
In the present embodiment, the executing body may capture a video of the target face object for use in authenticating the identity of the target face object.
Step 202: extract at least two video frames from the video, and match two video frames among the at least two video frames to obtain a video frame matching result.
In the present embodiment, based on the video obtained in step 201, the executing body may extract at least two video frames from the video and match two of them to obtain a video frame matching result. Here, the video frame matching result indicates whether the faces corresponding to the two matched video frames belong to the same person. For example, the video frame matching result may be the probability that the two matched video frames match; as another example, it may be a Boolean value, where 1 indicates that the two matched video frames match and 0 indicates that they do not, or vice versa. In some embodiments, other information may also be generated based on the video frame matching result and presented to the user, including but not limited to at least one of the following: text, numbers, symbols, images, and audio.
Specifically, the executing body may extract the at least two video frames from the authentication video in various ways. For example, they may be extracted at random, or video frames at preset positions in the video frame sequence corresponding to the authentication video (for example, the first and last frames in the sequence) may be extracted. It should be noted that each of the extracted at least two video frames includes a facial image.
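The two extraction strategies just described can be sketched as follows. This is an illustrative helper only; the function name and `strategy` parameter are hypothetical, and real frames would be decoded images rather than arbitrary objects.

```python
import random

def extract_frames(frames, k=2, strategy="endpoints", seed=None):
    """Pick at least two frames from a captured frame sequence.

    'endpoints' takes the first and last frames, matching the
    preset-position example in the text; 'random' samples k frames.
    """
    if len(frames) < 2:
        raise ValueError("need at least two frames")
    if strategy == "endpoints":
        return [frames[0], frames[-1]]
    # Random extraction; a seed makes the sampling reproducible.
    rng = random.Random(seed)
    return rng.sample(frames, k)
```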
It can be understood that faces belonging to the same person have the same facial features. Accordingly, for every two video frames among the at least two video frames, the executing body may match the facial features corresponding to the two video frames to determine whether the faces corresponding to them belong to the same person, thereby obtaining a video frame matching result.
In some optional implementations of the present embodiment, the executing body may match two video frames among the at least two video frames through the following steps to obtain the video frame matching result:
Step 2021: input each of the two video frames among the at least two video frames into a pre-trained face recognition model to obtain two feature vectors.
Here, the two feature vectors correspond one-to-one with the two video frames. A feature vector characterizes the facial features corresponding to a video frame. The face recognition model characterizes the correspondence between a facial image and the feature vector corresponding to that facial image. Specifically, as an example, the face recognition model may be a correspondence table, pre-established by a technician based on statistics of a large number of facial images and their corresponding feature vectors, that stores multiple facial images and their corresponding feature vectors.
In some optional implementations of the present embodiment, the face recognition model may be obtained through machine learning.
Specifically, as an example, the face recognition model may be obtained by the executing body or another electronic device through the following training steps. First, a training sample set is obtained, where a training sample includes a sample facial image and a sample feature vector predetermined for the sample facial image; the sample feature vector characterizes the facial features corresponding to the sample facial image. Then, using a machine learning method, the sample facial image included in a training sample from the set is taken as input, the sample feature vector corresponding to the input sample facial image is taken as the desired output, and the face recognition model is obtained through training.
Step 2022: determine the similarity between the two obtained feature vectors.
Here, the similarity characterizes the degree of similarity between the two feature vectors. Specifically, the similarity between two feature vectors may be characterized by the distance between them, or by the cosine of the angle between the two vectors.
Step 2023: in response to determining that the similarity is less than or equal to a preset threshold, generate a video frame matching result indicating that the faces corresponding to the two matched video frames do not belong to the same person; in response to determining that the similarity is greater than the preset threshold, generate a video frame matching result indicating that the faces corresponding to the two matched video frames belong to the same person.
In practice, the preset threshold is a predetermined minimum similarity value. It can be understood that when the similarity of the two feature vectors is less than or equal to the preset threshold, the two video frames corresponding to the two feature vectors differ greatly and their degree of similarity does not meet the preset requirement, so it can be determined that the faces corresponding to the two video frames do not belong to the same person. Similarly, when the similarity of the two feature vectors is greater than the preset threshold, the two corresponding video frames differ little and their degree of similarity meets the preset requirement, so it can be determined that the faces corresponding to the two video frames belong to the same person.
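Steps 2022 and 2023 can be sketched with the cosine-of-the-angle variant of the similarity measure. This is a minimal illustration under assumed names: the threshold value of 0.8 is arbitrary, and a real system would compute the embeddings with a trained face recognition model rather than receive them as lists.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (step 2022)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def frames_match(vec_a, vec_b, threshold=0.8):
    """Step 2023: a similarity above the preset threshold is judged
    to mean the two frames show the same person."""
    return cosine_similarity(vec_a, vec_b) > threshold
```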
In some optional implementations of the present embodiment, after obtaining the video frame matching result, the executing body may, in response to determining that the obtained video frame matching result indicates that the faces corresponding to the two matched video frames do not belong to the same person, output second prompt information characterizing an identity authentication failure. The second prompt information may include but is not limited to at least one of the following: text, numbers, symbols, images, audio, and video. In practice, the executing body may output the second prompt information to the person undergoing this identity authentication, so that the person learns the result of this identity authentication.
Step 203: in response to determining that the video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, generate, based on the video, result information indicating whether the target face object is a living face.
In the present embodiment, the executing body may, in response to determining that the video frame matching result indicates that the faces corresponding to the two matched video frames belong to the same person, generate, based on the video obtained in step 201, result information indicating whether the target face object is a living face. In some embodiments, machine learning may be used to generate the result information. For example, the result information may be the probability that the target face object is a living face; as another example, it may be a Boolean value, where 1 indicates that the target face object is a living face and 0 indicates a non-living face, or vice versa. In some embodiments, other information may also be generated based on the result information and presented to the user, including but not limited to at least one of the following: text, numbers, symbols, images, and audio.
It should be noted that when at least two video frame matching results are obtained, the executing body may specifically generate, based on the video, the result information indicating whether the target face object is a living face in response to determining that every one of the obtained video frame matching results indicates that the faces corresponding to the two matched video frames belong to the same person.
In practice, when liveness detection is performed, at least two target face objects may be captured, which can negatively affect the detection result. Therefore, here, the executing body may generate the result information indicating whether the captured target face object is a living face based on the video in response to determining that the obtained video frame matching result indicates that the faces corresponding to the matched video frames belong to the same person, which can improve the accuracy and reliability of the generated result information.
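The requirement that every pairwise matching result indicate the same person can be sketched as an all-pairs check. The function name is hypothetical and the matcher is a stub; with n extracted frames this performs n·(n−1)/2 comparisons.

```python
from itertools import combinations

def all_pairs_same_person(frames, same_person):
    """Return True only if every pairwise match between the extracted
    frames indicates the same person; only then does liveness
    detection proceed."""
    return all(same_person(a, b) for a, b in combinations(frames, 2))
```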
In some optional implementations of the present embodiment, the executing body may also output a preset action instruction in response to detecting the identity authentication trigger signal, where the action instruction instructs the target face object to perform an action. The executing body may then generate, based on the video, the result information indicating whether the target face object is a living face through the following steps: determine, based on the video, whether the target face object performed the action; in response to determining that the target face object performed the action, generate result information indicating that the target face object is a living face; in response to determining that the target face object did not perform the action, generate result information indicating that the target face object is a non-living face.
In this implementation, the action instruction instructs the target face object to perform an action. Specifically, the action instruction may take various forms (for example, voice, an image, or text). As an example, the action instruction may be the text "blink". The executing body may output the action instruction so that the target face object performs the action based on it.
Specifically, the executing body may analyze the video to determine whether the target face object performed the action indicated by the action instruction; in response to determining that the target face object performed the action indicated by the action instruction, generate result information indicating that the target face object is a living face (for example, the Boolean value "1"); in response to determining that the target face object did not perform the action indicated by the action instruction, generate result information indicating that the target face object is a non-living face (for example, the Boolean value "0").
It can be understood that when the target face object is a living face, it can perform the action based on the action instruction during shooting; when the target face object is a non-living face, it cannot. Accordingly, whether the target face object is a living face can be determined here by determining whether it performed the action indicated by the action instruction.
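The action check maps directly onto the Boolean result information described above. This sketch assumes an upstream action detector (not shown) has already produced the list of actions observed in the video; the function name is hypothetical.

```python
def action_liveness_result(requested_action, detected_actions):
    """Return the Boolean result information described in the text:
    '1' (living face) if the requested action, e.g. 'blink', was
    detected in the captured video, otherwise '0' (non-living face)."""
    performed = requested_action in detected_actions
    return "1" if performed else "0"
```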
In some optional implementations of the present embodiment, based on the video, the executing body may input a video frame of the video into a pre-trained liveness detection model to obtain the result information. Here, the video frame input into the liveness detection model may be any frame of the video. In particular, a video frame extracted in step 202 may be input into the liveness detection model directly.
In this implementation, the liveness detection model characterizes the correspondence between a video frame and the result information corresponding to that video frame. Specifically, as an example, the liveness detection model may be a correspondence table, pre-established by a technician based on statistics of a large number of video frames and their corresponding result information, that stores multiple video frames and their corresponding result information.
In some optional implementations of the present embodiment, the liveness detection model may be obtained through machine learning. In some embodiments, the liveness detection model is a silent liveness detection model, which can predict, from a single image, whether the face in that image comes from a real living body or from an image (such as an image of a living face).
Specifically, as an example, the silent liveness detection model may be obtained by the executing body or another electronic device through the following training steps. First, a training sample set is obtained, where a training sample includes a sample facial image and sample result information pre-labeled for the sample facial image; the sample result information indicates whether the sample face object corresponding to the sample facial image is a living face. For example, the training samples may include positive samples and negative samples, where a positive sample is a facial image from a real living body and a negative sample is a facial image from a non-living body (for example, an image obtained by re-photographing a facial image). Then, using a machine learning method, the sample facial image included in a training sample from the set is taken as input, the sample result information corresponding to the input sample facial image is taken as the desired output, and the silent liveness detection model is obtained through training.
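The positive/negative labeling step of that training procedure can be sketched as follows. This only shows the dataset assembly, not the model training itself; the function name and the 1/0 labels are illustrative assumptions.

```python
def build_training_set(live_images, spoof_images):
    """Assemble labeled samples for the silent liveness detector:
    real-capture images become positive samples, re-photographed
    (photo-of-a-photo) captures become negative samples."""
    samples = [(img, 1) for img in live_images]    # living face
    samples += [(img, 0) for img in spoof_images]  # non-living face
    return samples
```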
In some optional implementations of the present embodiment, after generating the result information, the executing body may, in response to determining that the result information indicates that the target face object is a non-living face, output first prompt information characterizing an identity authentication failure. The first prompt information may include but is not limited to at least one of the following: text, numbers, symbols, images, audio, and video. In practice, the executing body may output the first prompt information to the person undergoing this identity authentication, so that the person learns the result of this identity authentication. It should be noted that the first prompt information may be identical to the second prompt information described above.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information processing method according to the present embodiment. In the application scenario of Fig. 3, a mobile phone 301 may capture a video 304 of a target face object 303 in response to detecting an identity authentication trigger signal 302. Then, the mobile phone 301 may extract two video frames from the video 304, namely video frame 3041 and video frame 3042, and match video frame 3041 with video frame 3042 to obtain a video frame matching result 305. Finally, in response to determining that the video frame matching result 305 indicates that the faces corresponding to the matched video frames 3041 and 3042 belong to the same person, the mobile phone 301 may generate, based on the video 304, result information 306 indicating whether the target face object 303 is a living face.
Currently, in identity authentication scenarios, liveness detection is usually performed on the face object to be authenticated in order to reduce the risk of identity theft. However, in practical application scenarios, at least two target face objects may be captured during action-based liveness detection; in that case, it may be difficult for the machine to determine which target face object the identity authentication should be based on, so that the authentication result is erroneous. Moreover, one form of identity theft seen in practice is that the identity thief personally performs the actions required during action-based liveness detection and, while performing them, switches to displaying a face of the stolen user (for example, a face image); if the displayed face image of the stolen user is then extracted and used for identity authentication, the identity theft can succeed. Therefore, in view of the above problems, there is a need to further improve the accuracy and reliability of identity authentication.
The method provided by the above embodiment of the disclosure captures a video of a target face object in response to detecting an identity authentication trigger signal, then extracts at least two video frames from the video and matches two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person, and, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generates, based on the video, result information used to indicate whether the target face object is a living face. The result information is thus generated based on the video only when the target face object is determined to belong to a single person. Compared with existing electronic devices for liveness detection, an electronic device using the method provided by the above embodiment of the disclosure can reduce the influence of capturing at least two target face objects during liveness detection and generate more accurate and reliable result information; it can therefore have a more accurate and reliable liveness detection function, which helps it to generate and output a more accurate identity authentication result.
With further reference to Fig. 4, a flow 400 of another embodiment of the information processing method is illustrated. The flow 400 of the information processing method includes the following steps:
Step 401, in response to detecting an identity authentication trigger signal, capturing a video of a target face object.
In the present embodiment, the executing subject of the information processing method (for example, the terminal devices 101, 102 and 103 shown in Fig. 1) may capture the video of the target face object in response to detecting the identity authentication trigger signal through a wired or wireless connection. Here, the identity authentication trigger signal is a signal for triggering an identity authentication operation.
Step 402, extracting at least two video frames from the video, and matching two video frames among the at least two video frames to obtain a video frame matching result.
In the present embodiment, based on the video obtained in step 401, the above-mentioned executing subject may extract at least two video frames from the video and match two video frames among the at least two video frames to obtain the video frame matching result. Here, the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person.
Step 403, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generating, based on the video, result information used to indicate whether the target face object is a living face.
In the present embodiment, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, the above-mentioned executing subject may generate, based on the video obtained in step 401, result information used to indicate whether the captured target face object is a living face.
The above steps 401, 402 and 403 are respectively consistent with steps 201, 202 and 203 in the foregoing embodiment; the descriptions given above for steps 201, 202 and 203 also apply to steps 401, 402 and 403 and are not repeated here.
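The control flow of steps 401-403 can be sketched as follows. The helpers `same_person` and `liveness_check` are hypothetical stand-ins for the frame matching and liveness detection described above, injected as callables; they are not APIs named in the disclosure.

```python
def authenticate(video_frames, same_person, liveness_check):
    """Steps 402-403: frame matching gates liveness detection.

    video_frames   -- at least two frames extracted from the captured video
    same_person    -- callable(frame_a, frame_b) -> bool (frame matching)
    liveness_check -- callable(frames) -> bool (liveness detection)
    """
    if len(video_frames) < 2:
        raise ValueError("at least two video frames are required")
    first, last = video_frames[0], video_frames[-1]
    # Step 402: match two of the extracted frames.
    if not same_person(first, last):
        return "mismatch"  # different persons; liveness detection is skipped
    # Step 403: only when both frames show the same person, decide liveness.
    return "live" if liveness_check(video_frames) else "not_live"
```

The point of the gating order is that result information is never generated for a video whose frames show different persons, which is the core of the embodiment.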
Step 404, in response to determining that the result information indicates that the target face object is a living face, selecting a video frame from the at least two video frames as an identity authentication image corresponding to the target face object.
In the present embodiment, in response to determining that the result information indicates that the target face object is a living face, the above-mentioned executing subject may select a video frame from the at least two video frames as the identity authentication image corresponding to the target face object. Here, the identity authentication image is an image used for identity authentication, that is, for determining whether the captured target face object belongs to a pre-registered user.
Specifically, the above-mentioned executing subject may select the identity authentication image from the at least two video frames using various methods. For example, a random selection method may be used; alternatively, the video frame with the highest clarity may be selected as the identity authentication image.
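A minimal sketch of the "highest clarity" selection mentioned above. The disclosure does not fix a clarity metric, so the gradient-energy proxy used here (sum of squared differences of adjacent grayscale pixels) is an assumption; frames are represented as nested lists of pixel values.

```python
def sharpness(frame):
    """Gradient-energy proxy for clarity: sum of squared differences
    between horizontally and vertically adjacent grayscale pixels."""
    h, w = len(frame), len(frame[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (frame[y][x] - frame[y][x + 1]) ** 2
            if y + 1 < h:
                total += (frame[y][x] - frame[y + 1][x]) ** 2
    return total

def pick_auth_image(frames):
    """Select the sharpest frame as the identity authentication image."""
    return max(frames, key=sharpness)
```

In practice a variance-of-Laplacian measure over real images would serve the same role; the selection rule itself is just an argmax over the per-frame scores.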
Step 405, sending the identity authentication image to a server side.
In the present embodiment, the above-mentioned executing subject may send the identity authentication image to the server side. The server side is a server side that is in communication connection with the executing subject and that matches the identity authentication image sent by the executing subject against a pre-stored user face image. Here, the user face image is a face image of a pre-registered user.
Step 406, obtaining a matching result from the server side.
In the present embodiment, the above-mentioned executing subject may obtain the matching result from the above-mentioned server side. Here, the matching result is used to indicate whether the identity authentication image matches the pre-stored user face image. For example, the matching result may be a probability that the identity authentication image matches the user face image; for another example, the matching result may be a Boolean value, where 1 indicates that the identity authentication image matches the user face image and 0 indicates that they do not match, or vice versa. In some embodiments, other information to be presented to the user may also be generated based on the matching result, including but not limited to at least one of the following: text, numbers, symbols, images, audio, and video.
It can be understood that when the matching result indicates that the identity authentication image matches the pre-stored user face image, the identity authentication succeeds; when the matching result indicates that the identity authentication image does not match the pre-stored user face image, the identity authentication fails.
Specifically, the server side may match the identity authentication image against the pre-stored user face image using various methods to obtain the matching result. For example, the similarity between the identity authentication image and the user face image may be determined; in response to determining that the similarity is greater than or equal to a preset similarity threshold, a matching result indicating that the identity authentication image matches the user face image is generated, and in response to determining that the similarity is less than the similarity threshold, a matching result indicating that the identity authentication image does not match the user face image is generated. Here, the similarity is a numerical value characterizing the degree of similarity between the identity authentication image and the user face image; the greater the similarity, the higher the degree of similarity between the two images.
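The server-side rule above (similarity greater than or equal to the threshold means a match) might be sketched as below. The `embed` callable stands in for whatever feature extractor the server side uses, and both the cosine measure and the default threshold value are assumptions not fixed by the disclosure.

```python
import math

def server_match(auth_image, user_face_image, embed, threshold=0.8):
    """Embed both images, compute cosine similarity, and apply the rule:
    similarity >= preset threshold -> the images match (authentication succeeds)."""
    u, v = embed(auth_image), embed(user_face_image)
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den >= threshold
```

With `embed` as the identity function on feature vectors, identical vectors give similarity 1.0 (a match) and orthogonal vectors give 0.0 (no match).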
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the information processing method in the present embodiment highlights the steps of selecting, in response to determining that the result information indicates that the target face object is a living face, a video frame from the at least two video frames as the identity authentication image corresponding to the target face object, and of matching the target face object against the user using the identity authentication image and the pre-stored user face image. The scheme described in the present embodiment therefore performs face matching only after the target face object has been determined to be a living face; this adds a condition for face matching, helps to reduce the load on the server side that performs the face matching, and improves the matching speed of the server side. Moreover, compared with re-shooting an identity authentication image after liveness detection, extracting the identity authentication image from the at least two video frames allows faster image acquisition and also improves the consistency between the target face object on which liveness detection is performed and the target face object corresponding to the identity authentication image, thereby helping to achieve more reliable identity authentication.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an information processing apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied to various electronic devices.
As shown in Fig. 5, the information processing apparatus 500 of the present embodiment includes a capturing unit 501, an extraction unit 502, a matching unit 503 and a generation unit 504. The capturing unit 501 is configured to capture a video of a target face object in response to detecting an identity authentication trigger signal; the extraction unit 502 is configured to extract at least two video frames from the video; the matching unit 503 is configured to match two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person; and the generation unit 504 is configured to generate, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person and based on the video, result information used to indicate whether the target face object is a living face.
In the present embodiment, the capturing unit 501 of the information processing apparatus 500 may capture the video of the target face object in response to detecting the identity authentication trigger signal through a wired or wireless connection. Here, the identity authentication operation is an operation for authenticating whether the person corresponding to the captured target face object is a pre-registered user, and the video is used for authenticating the identity of the captured target face object.
In the present embodiment, based on the video obtained by the capturing unit 501, the extraction unit 502 may extract at least two video frames from the video.
In the present embodiment, based on the at least two video frames extracted by the extraction unit 502, the matching unit 503 may match two video frames among the at least two video frames to obtain a video frame matching result. Here, the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person.
In the present embodiment, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, the generation unit 504 may generate, based on the video obtained by the capturing unit 501, result information used to indicate whether the target face object is a living face.
In some optional implementations of the present embodiment, the apparatus 500 may further include an instruction output unit (not shown in the figure), configured to output a preset action instruction in response to detecting the identity authentication trigger signal, where the action instruction is used to instruct the target face object to perform an action. The generation unit 504 may include: a first determining module (not shown in the figure), configured to determine, based on the video, whether the target face object has performed the action; a first generation module (not shown in the figure), configured to generate, in response to determining that the target face object has performed the action, result information used to indicate that the target face object is a living face; and a second generation module (not shown in the figure), configured to generate, in response to determining that the target face object has not performed the action, result information used to indicate that the target face object is a non-living face.
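The first and second generation modules above reduce to a single decision once an action detector is available. The sketch below assumes the detector outputs a set of action labels observed in the video; the detector itself, and the dictionary shape of the result information, are assumptions for illustration only.

```python
def action_liveness_result(observed_actions, instructed_action):
    """Generate the result information of the action-based implementation:
    the target is a living face iff the instructed action was performed."""
    performed = instructed_action in observed_actions
    return {"target_is_living_face": performed}
```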
In some optional implementations of the present embodiment, the generation unit 504 may be further configured to input video frames of the video into a pre-trained liveness detection model to obtain the result information.
In some optional implementations of the present embodiment, the liveness detection model is obtained by machine learning.
In some optional implementations of the present embodiment, the liveness detection model is a silent liveness detection model.
In some optional implementations of the present embodiment, the apparatus 500 may further include a first output unit (not shown in the figure), configured to output, in response to determining that the result information indicates that the target face object is a non-living face, first prompt information for characterizing identity authentication failure.
In some optional implementations of the present embodiment, the apparatus 500 may further include: a selecting unit (not shown in the figure), configured to select, in response to determining that the result information indicates that the target face object is a living face, a video frame from the at least two video frames as an identity authentication image corresponding to the target face object; a transmission unit (not shown in the figure), configured to send the identity authentication image to a server side; and an acquiring unit (not shown in the figure), configured to obtain a matching result from the server side, where the matching result is used to indicate whether the identity authentication image matches a pre-stored user face image.
In some optional implementations of the present embodiment, the matching unit 503 may include: an input module (not shown in the figure), configured to input the two video frames among the at least two video frames respectively into a pre-trained face recognition model to obtain two feature vectors; a second determining module (not shown in the figure), configured to determine the similarity between the two obtained feature vectors; and a third generation module (not shown in the figure), configured to generate, in response to determining that the similarity is less than or equal to a preset threshold, a video frame matching result used to indicate that the faces respectively corresponding to the two matched video frames do not belong to the same person, and to generate, in response to determining that the similarity is greater than the preset threshold, a video frame matching result used to indicate that the faces respectively corresponding to the two matched video frames belong to the same person.
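The matching unit's threshold rule might be sketched as below. Here `face_model` is a stand-in for the pre-trained face recognition model, and mapping the Euclidean distance between the two feature vectors to a similarity in (0, 1] is an assumption, since the disclosure does not fix the similarity measure or the threshold value.

```python
import math

def frames_same_person(frame_a, frame_b, face_model, threshold=0.5):
    """Input both frames into the (hypothetical) face recognition model,
    derive a similarity from the two feature vectors, and apply the rule:
    similarity <= preset threshold -> not the same person;
    similarity >  preset threshold -> the same person."""
    vec_a, vec_b = face_model(frame_a), face_model(frame_b)
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))
    similarity = 1.0 / (1.0 + distance)  # distance 0 maps to similarity 1
    return similarity > threshold
```

With a real embedding model (e.g. a 128-dimensional face embedding), the same structure applies; only `face_model` and the threshold calibration change.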
In some optional implementations of the present embodiment, the face recognition model is obtained by machine learning.
In some optional implementations of the present embodiment, the apparatus 500 may further include a second output unit (not shown in the figure), configured to output, in response to determining that the obtained video frame matching result indicates that the faces respectively corresponding to the two matched video frames do not belong to the same person, second prompt information for characterizing identity authentication failure.
It can be understood that the units recorded in the apparatus 500 correspond respectively to the steps in the method described with reference to Fig. 2. Accordingly, the operations, features and beneficial effects described above for the method also apply to the apparatus 500 and the units included therein; details are not repeated here.
The apparatus 500 provided by the above embodiment of the disclosure captures a video of a target face object in response to detecting an identity authentication trigger signal, then extracts at least two video frames from the video and matches two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person, and then, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generates, based on the video, result information used to indicate whether the target face object is a living face. The apparatus can thus detect, before generating the result information, whether the faces in the captured video belong to the same person, and execute the step of generating the result information only when they are determined to belong to the same person. This improves the accuracy and reliability of the generated result information and helps to achieve more reliable identity authentication.
Referring now to Fig. 6, a structural schematic diagram of a terminal device 600 (for example, the terminal devices 101, 102 and 103 shown in Fig. 1) suitable for implementing the embodiments of the present disclosure is illustrated. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (for example, vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the terminal device 600 may include a processing unit (for example, a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the terminal device 600. The processing unit 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a loudspeaker and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the terminal device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows a terminal device 600 having various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing unit 601, the above-described functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in connection with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned terminal device, or may exist separately without being assembled into the terminal device. The above-mentioned computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: capture a video of a target face object in response to detecting an identity authentication trigger signal; extract at least two video frames from the video; match two video frames among the at least two video frames to obtain a video frame matching result, where the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person; and, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generate, based on the video, result information used to indicate whether the target face object is a living face.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logic function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the units do not, in some cases, constitute limitations on the units themselves; for example, the capturing unit may also be described as "a unit that captures a video of a target face object".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Claims (13)
1. An information processing method, comprising:
in response to detecting an identity authentication trigger signal, capturing a video of a target face object;
extracting at least two video frames from the video;
matching two video frames among the at least two video frames to obtain a video frame matching result, wherein the video frame matching result is used to indicate whether faces respectively corresponding to the two matched video frames belong to the same person; and
in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generating, based on the video, result information used to indicate whether the target face object is a living face.
2. The method according to claim 1, wherein the method further comprises:
in response to detecting the identity authentication trigger signal, outputting a preset action instruction, wherein the action instruction is used to instruct the target face object to perform an action; and
generating, based on the video, the result information used to indicate whether the target face object is a living face comprises:
determining, based on the video, whether the target face object has performed the action;
in response to determining that the target face object has performed the action, generating result information used to indicate that the target face object is a living face; and
in response to determining that the target face object has not performed the action, generating result information used to indicate that the target face object is a non-living face.
3. The method according to claim 1, wherein generating, based on the video, the result information used to indicate whether the target face object is a living face comprises:
inputting video frames of the video into a pre-trained liveness detection model to obtain the result information.
4. The method according to claim 3, wherein the liveness detection model is obtained by machine learning.
5. The method according to claim 3, wherein the liveness detection model is a silent liveness detection model.
6. The method according to claim 1, wherein the method further comprises:
in response to determining that the result information indicates that the target face object is a non-living face, outputting first prompt information for characterizing identity authentication failure.
7. The method according to claim 1, wherein the method further comprises:
in response to determining that the result information indicates that the target face object is a living face, selecting a video frame from the at least two video frames as an identity authentication image corresponding to the target face object;
sending the identity authentication image to a server side; and
obtaining a matching result from the server side, wherein the matching result is used to indicate whether the identity authentication image matches a pre-stored user face image.
8. The method according to claim 1, wherein matching the two video frames among the at least two video frames to obtain the video frame matching result comprises:
inputting the two video frames among the at least two video frames respectively into a pre-trained face recognition model to obtain two feature vectors;
determining a similarity between the two obtained feature vectors;
in response to determining that the similarity is less than or equal to a preset threshold, generating a video frame matching result used to indicate that the faces respectively corresponding to the two matched video frames do not belong to the same person; and
in response to determining that the similarity is greater than the preset threshold, generating a video frame matching result used to indicate that the faces respectively corresponding to the two matched video frames belong to the same person.
9. according to the method described in claim 8, wherein, the human face recognition model is obtained by machine learning.
10. The method according to any one of claims 1-9, wherein the method further comprises:
in response to determining that the obtained video frame matching result indicates that the faces respectively corresponding to the two matched video frames do not belong to the same person, outputting second prompt information for indicating that the identity authentication has failed.
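Putting the branches of claims 1, 6, 7, and 10 together, the overall flow can be sketched as below. The frame-matching and liveness checks are injected as callables, since the claims leave the underlying models unspecified; the prompt strings are illustrative placeholders, not the claimed wording.

```python
def process(video_frames, match_frames_fn, liveness_fn):
    """Match two extracted frames; if they show different persons,
    output the second prompt (claim 10). Otherwise run liveness
    detection; if the target face is not a living face, output the
    first prompt (claim 6); else proceed to server-side matching."""
    frame_a, frame_b = video_frames[0], video_frames[1]
    if not match_frames_fn(frame_a, frame_b):
        return "second prompt: identity authentication failed"
    if not liveness_fn(video_frames):
        return "first prompt: identity authentication failed"
    return "living face: proceed to server-side image matching"

outcome = process(["f1", "f2"],
                  match_frames_fn=lambda a, b: True,
                  liveness_fn=lambda frames: True)
```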
11. An information processing apparatus, comprising:
a capturing unit, configured to capture a video of a target face object in response to detecting an identity authentication trigger signal;
an extraction unit, configured to extract at least two video frames from the video;
a matching unit, configured to match two video frames of the at least two video frames to obtain a video frame matching result, wherein the video frame matching result is used to indicate whether the faces respectively corresponding to the two matched video frames belong to the same person; and
a generation unit, configured to, in response to determining that the video frame matching result indicates that the faces respectively corresponding to the two matched video frames belong to the same person, generate, based on the video, result information used to indicate whether the target face object is a living human face.
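The unit decomposition of claim 11 can be sketched as a class whose units are injected callables. All names and the concrete capture/extraction/matching/liveness logic are assumptions; the claim only fixes the unit boundaries and data flow.

```python
class InformationProcessingApparatus:
    """Claim-11 apparatus sketch: one attribute per claimed unit."""

    def __init__(self, capture, extract, match, detect_liveness):
        self.capture = capture                  # capturing unit
        self.extract = extract                  # extraction unit
        self.match = match                      # matching unit
        self.detect_liveness = detect_liveness  # generation unit core

    def on_auth_trigger(self):
        # Runs on an identity authentication trigger signal.
        video = self.capture()
        frames = self.extract(video)
        if self.match(frames[0], frames[1]):
            # Same person in both frames: generate liveness result info.
            return self.detect_liveness(video)
        return None  # different persons: no result information generated

apparatus = InformationProcessingApparatus(
    capture=lambda: "video",
    extract=lambda v: ["frame1", "frame2"],
    match=lambda a, b: True,
    detect_liveness=lambda v: "living face",
)
outcome = apparatus.on_auth_trigger()
```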
12. A terminal device, comprising:
one or more processors;
a storage device on which one or more programs are stored; and
a camera configured to capture video;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10.
13. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910211760.XA CN109871834A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109871834A true CN109871834A (en) | 2019-06-11 |
Family
ID=66920844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910211760.XA Pending CN109871834A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109871834A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751110A (en) * | 2013-12-31 | 2015-07-01 | 汉王科技股份有限公司 | Bio-assay detection method and device |
CN105893920A (en) * | 2015-01-26 | 2016-08-24 | 阿里巴巴集团控股有限公司 | Human face vivo detection method and device |
CN108494778A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Identity identifying method and device |
CN108875546A (en) * | 2018-04-13 | 2018-11-23 | 北京旷视科技有限公司 | Face auth method, system and storage medium |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276313A (en) * | 2019-06-25 | 2019-09-24 | 网易(杭州)网络有限公司 | Identity identifying method, identification authentication system, medium and calculating equipment |
CN110472491A (en) * | 2019-07-05 | 2019-11-19 | 深圳壹账通智能科技有限公司 | Abnormal face detecting method, abnormality recognition method, device, equipment and medium |
WO2021004112A1 (en) * | 2019-07-05 | 2021-01-14 | 深圳壹账通智能科技有限公司 | Anomalous face detection method, anomaly identification method, device, apparatus, and medium |
CN110609921A (en) * | 2019-08-30 | 2019-12-24 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN110609921B (en) * | 2019-08-30 | 2022-08-19 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN110826486A (en) * | 2019-11-05 | 2020-02-21 | 拉卡拉支付股份有限公司 | Face recognition auxiliary detection method and device |
CN111046804A (en) * | 2019-12-13 | 2020-04-21 | 北京旷视科技有限公司 | Living body detection method, device, electronic device, and readable storage medium |
WO2022028425A1 (en) * | 2020-08-05 | 2022-02-10 | 广州虎牙科技有限公司 | Object recognition method and apparatus, electronic device and storage medium |
CN112084858A (en) * | 2020-08-05 | 2020-12-15 | 广州虎牙科技有限公司 | Object recognition method and device, electronic equipment and storage medium |
CN112084858B (en) * | 2020-08-05 | 2025-01-24 | 广州虎牙科技有限公司 | Object recognition method and device, electronic device and storage medium |
CN112150514A (en) * | 2020-09-29 | 2020-12-29 | 上海眼控科技股份有限公司 | Pedestrian trajectory tracking method, device and equipment of video and storage medium |
CN113762969A (en) * | 2021-04-23 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Information processing method, information processing device, computer equipment and storage medium |
CN113762969B (en) * | 2021-04-23 | 2023-08-08 | 腾讯科技(深圳)有限公司 | Information processing method, apparatus, computer device, and storage medium |
CN113344550B (en) * | 2021-06-30 | 2023-11-28 | 西安力传智能技术有限公司 | Flow processing method, device, equipment and storage medium |
CN113344550A (en) * | 2021-06-30 | 2021-09-03 | 西安力传智能技术有限公司 | Flow processing method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109871834A (en) | Information processing method and device | |
US11887369B2 (en) | Systems and methods for generating media content | |
CN109977839A (en) | Information processing method and device | |
CN109934191A (en) | Information processing method and device | |
CN109993150B (en) | Method and device for identifying age | |
CN107038784B (en) | Safe verification method and device | |
CN109086719A (en) | Method and apparatus for output data | |
CN108494778A (en) | Identity identifying method and device | |
CN108171211A (en) | Biopsy method and device | |
CN110188719A (en) | Method for tracking target and device | |
CN110059624A (en) | Method and apparatus for detecting living body | |
CN110060441A (en) | Method and apparatus for terminal anti-theft | |
CN108549848A (en) | Method and apparatus for output information | |
CN108521516A (en) | Control method and device for terminal device | |
CN108509611A (en) | Method and apparatus for pushed information | |
CN110110666A (en) | Object detection method and device | |
CN110046571B (en) | Method and device for identifying age | |
CN108600250A (en) | Authentication method | |
CN111738199B (en) | Image information verification method, device, computing device and medium | |
CN109934142A (en) | Method and apparatus for generating the feature vector of video | |
CN110008926A (en) | The method and apparatus at age for identification | |
CN109919220A (en) | Method and apparatus for generating the feature vector of video | |
CN110188660A (en) | The method and apparatus at age for identification | |
CN109829431A (en) | Method and apparatus for generating information | |
CN109241344A (en) | Method and apparatus for handling information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||