CN120856376A - Identity authentication method and related device - Google Patents
Identity authentication method and related device
- Publication number
- CN120856376A (application number CN202510886324.8A)
- Authority
- CN
- China
- Prior art keywords
- information
- task request
- ith
- identity
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0861—Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/16—Threshold monitoring
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3226—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
- H04L9/3231—Biological data, e.g. fingerprint, voice or retina
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Collating Specific Patterns (AREA)
Abstract
The application discloses an identity authentication method and a related device. In response to a task request from a target account for a task to be executed, information is extracted from the task request to obtain information to be verified in multiple dimensions; the task request is thus used to mine identity-related information of the requesting user in multiple dimensions, which is richer in content than single-dimension information and better reflects the user's identity. Based on the multi-dimensional information to be verified, an identity confidence calculation for the target account is performed to obtain a matching confidence of the task request. If the verification result of the task request is determined, according to the matching confidence, to be that verification passes, the task to be executed corresponding to the task request is executed. Because identity recognition is performed based on the richer multi-dimensional identity-related information, the security verification accuracy of the task to be executed is improved; no additional identity verification operation is needed, real-time and imperceptible identity verification is realized, user operation is simplified, and security and convenience are better balanced.
Description
Technical Field
The present application relates to the field of data processing, and in particular, to an identity authentication method and related device.
Background
With the development of artificial intelligence (Artificial Intelligence, AI), AI-based agents can act as virtual assistants that perform tasks on behalf of users, including highly security-sensitive operations such as payment transactions, money transfers, red-packet distribution, and financial product purchases. Before performing these operations, the agent needs to confirm that the operation task request comes from the legitimate user rather than from someone else, so as to ensure account security; that is, user identity authentication is required before the operations are performed.
Specifically, the agent acquires a task request from the target account and needs to determine that the task request comes from the user corresponding to the target account. This may be determined, for example, by requiring input of additional verification information such as a password, a verification code, or biometric information. However, inputting such additional verification information typically requires extra operations by the user, reducing the convenience advantage of the agent.
Disclosure of Invention
To solve the above technical problem, the application provides an identity authentication method and a related device, which use a task request to mine identity-related information of the requesting user in multiple dimensions. Compared with single-dimension identity-related information, this multi-dimensional information is richer in content and better reflects the user's identity. Performing identity recognition based on this richer multi-dimensional identity-related information improves the security verification accuracy of the task to be executed. Moreover, identity recognition can be performed based on the task request itself without any additional identity verification operation, realizing real-time and imperceptible identity verification, simplifying user operation, and better balancing security and convenience.
The embodiment of the application discloses the following technical scheme:
In one aspect, the present application provides an identity authentication method, the method comprising:
Responding to a task request from a target account, and extracting information from the task request to obtain information to be verified in multiple dimensions, wherein the multiple dimensions at least comprise a voice voiceprint dimension and a language behavior dimension or at least comprise a text input behavior dimension and a language behavior dimension;
Based on the information to be verified in multiple dimensions, identity confidence calculation aiming at the target account is carried out, and matching confidence of the task request is obtained;
And if it is determined, according to the matching confidence, that the verification result of the task request is that verification passes, executing the task to be executed corresponding to the task request.
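The three claimed steps can be sketched as a minimal pipeline; all function names, the per-dimension scores, and the 0.8 threshold are illustrative assumptions, not part of the claimed method.

```python
PASS_THRESHOLD = 0.8  # assumed threshold; the claim leaves the value unspecified

def extract_info(task_request):
    # Step 1 stand-in: a real system would run voiceprint / language-behavior
    # extractors here; we just read precomputed per-dimension scores.
    return {"voiceprint": task_request.get("voice_score", 0.0),
            "language": task_request.get("lang_score", 0.0)}

def compute_matching_confidence(info):
    # Step 2 stand-in: a plain average replaces the identity confidence model.
    return sum(info.values()) / len(info)

def authenticate_and_execute(task_request):
    info = extract_info(task_request)               # step 1: multi-dimensional extraction
    confidence = compute_matching_confidence(info)  # step 2: matching confidence
    if confidence >= PASS_THRESHOLD:                # step 3: execute only on "pass"
        return "executed"
    return "rejected"
```
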
In another aspect, the present application provides an identity authentication device, the device comprising:
an information extraction unit, configured to respond to a task request from a target account and extract information from the task request to obtain information to be verified in multiple dimensions, where the multiple dimensions at least include a voice voiceprint dimension and a language behavior dimension, or at least include a text input behavior dimension and a language behavior dimension;
The confidence coefficient calculating unit is used for calculating the identity confidence coefficient aiming at the target account based on the information to be verified in the multiple dimensions to obtain the matching confidence coefficient of the task request;
And the task execution unit is used for executing the task to be executed corresponding to the task request if the verification result of the task request is determined to be verification passing according to the matching confidence coefficient.
Optionally, the information extraction unit includes:
the information acquisition unit is used for responding to a task request from a target account and acquiring interaction content from the target account within a preset time period before the task request;
And the information extraction subunit is used for respectively extracting information from the task request and the interactive content to obtain information to be verified in multiple dimensions.
Optionally, the information extraction subunit includes:
A voice extraction unit for extracting information of the voice input information aiming at the task request and the voice input information in the interactive content to obtain voice voiceprint information of voice voiceprint dimension and voice language behavior information of language behavior dimension, so that the information to be verified comprises the voice voiceprint information and the voice language behavior information, and/or,
The word extraction unit is used for extracting information of the word input information aiming at the task request and the word input information in the interactive content to obtain word input behavior information of word input behavior dimension and word language behavior information of language behavior dimension, so that the information to be verified comprises the word input behavior information and the word language behavior information.
Optionally, the confidence coefficient calculating unit includes:
The feature extraction unit is used for respectively extracting features of the information to be verified in multiple dimensions to obtain multiple features to be verified;
the feature mapping unit is used for mapping the feature space of the plurality of features to be verified to obtain a plurality of standard features;
and the confidence coefficient calculating subunit is used for calculating the identity confidence coefficient aiming at the target account based on the plurality of standard features to obtain the matching confidence coefficient of the task request.
Optionally, the confidence calculating subunit is specifically configured to:
the identity confidence calculation for the target account is performed separately based on each of the plurality of standard features to obtain a plurality of initial confidences corresponding to the plurality of standard features, and the plurality of initial confidences are weighted and averaged according to the weights of the plurality of dimensions to obtain the matching confidence of the task request; or,
the identity confidence calculation for the target account is performed based on a comprehensive feature obtained from the plurality of standard features, to obtain the matching confidence of the task request.
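The first alternative above, combining per-dimension initial confidences by a weighted average, can be sketched as follows; the dimension names and weight values are assumptions for illustration.

```python
def match_confidence(initial_confidences, weights):
    # Weighted average of the per-dimension initial confidences,
    # normalized by the total weight.
    total = sum(weights.values())
    return sum(initial_confidences[d] * w for d, w in weights.items()) / total

# Assumed dimension weights and per-dimension initial confidences:
weights = {"voiceprint": 0.5, "language": 0.3, "typing": 0.2}
scores = {"voiceprint": 0.92, "language": 0.80, "typing": 0.70}
confidence = match_confidence(scores, weights)  # 0.92*0.5 + 0.80*0.3 + 0.70*0.2 = 0.84
```
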
Optionally, the apparatus further includes:
The history information acquisition unit is used for acquiring history interaction information from the target account, wherein the history interaction information comprises history voice input information and history text input information;
the historical information extraction unit is used for extracting information from the historical interaction information to obtain training information with multiple dimensions;
the model construction unit is configured to construct a feature processing model based on the training information, where the feature processing model is used to perform the identity confidence calculation for the target account based on the information to be verified in the multiple dimensions, to obtain the matching confidence of the task request.
Optionally, the apparatus further includes:
The guiding unit is used for providing preset guiding content so that the historical voice input information comprises voice input information corresponding to the preset guiding content, and the historical text input information comprises text input information corresponding to the preset guiding content.
Optionally, the apparatus further includes:
The model updating unit is configured to, in response to an update condition being satisfied for the i-th time, perform the i-th update of the feature processing model according to the i-th interaction information from the target account, or according to the i-th interaction information and the i-th feedback information, where i is a positive integer, and the update condition being satisfied for the i-th time includes at least one of: the duration without an update reaching a preset duration, the verification result being misjudged, and the amount of interaction information reaching a preset amount.
Optionally, the model updating unit includes:
The parameter determining unit is used for responding to the ith meeting of the updating condition, and calculating and obtaining the ith target parameter of the characteristic processing model according to the ith interaction information from the target account or the ith interaction information and the ith feedback information;
The weighting unit is used for carrying out weighted average on the ith-1 parameter and the ith target parameter of the characteristic processing model to obtain the ith parameter of the characteristic processing model, wherein the 0 th parameter is the original parameter of the characteristic processing model;
And the updating unit is used for carrying out the ith updating on the characteristic processing model according to the ith parameter.
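The update rule described by the parameter determining, weighting, and updating units amounts to blending the previous parameters with the newly computed target parameters, in the style of an exponential moving average. A minimal sketch; the blend weight `alpha` is an assumed value, since the text does not fix the weighting.

```python
def update_parameters(prev_params, target_params, alpha=0.3):
    # i-th parameter = weighted average of the (i-1)-th parameter and the
    # i-th target parameter; alpha is the (assumed) weight of the new target.
    return [(1 - alpha) * p + alpha * t for p, t in zip(prev_params, target_params)]

params = [0.5, 1.0]   # 0th parameters: the model's original parameters
target = [0.9, 0.6]   # 1st target parameters, computed from new interaction data
params = update_parameters(params, target)  # 1st parameters: [0.62, 0.88]
```
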
Optionally, the apparatus further includes:
the threshold determining unit is used for determining a confidence threshold corresponding to the task request according to the request category of the task request;
And the verification result determining unit is used for determining that the verification result of the task request is verification passing if the matching confidence coefficient and the confidence coefficient threshold meet the verification passing condition.
Optionally, the apparatus further includes:
the instruction providing unit is used for providing an additional verification information input instruction if the matching confidence coefficient and the confidence coefficient threshold meet additional verification conditions and the verification result of the task request is determined to be that additional verification is needed;
and the refusing execution unit is used for refusing to execute the task to be executed if the matching confidence coefficient and the confidence coefficient threshold value meet the verification failure condition and the verification result of the task request is determined to be verification failure.
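The three outcomes above (verification passes, additional verification needed, verification fails) suggest a banded comparison of the matching confidence against a category-dependent threshold. A minimal sketch, with assumed threshold values and band width:

```python
# Assumed per-category thresholds; the text only states that the threshold
# depends on the request category, not what the values are.
THRESHOLDS = {"payment": 0.90, "data_modification": 0.75, "query": 0.50}
ADDITIONAL_BAND = 0.15  # assumed width of the "additional verification" band

def verification_result(confidence, request_category):
    t = THRESHOLDS[request_category]
    if confidence >= t:
        return "pass"                      # verification passing condition met
    if confidence >= t - ADDITIONAL_BAND:
        return "additional_verification"   # prompt for additional verification info
    return "fail"                          # refuse to execute the task
```
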
Optionally, the apparatus further includes:
The display unit is used for responding to the task request and displaying the dimension information of the plurality of dimensions, and/or displaying the matching confidence, and/or displaying the verification result, and/or displaying the execution result of the task to be executed.
In another aspect, the application provides a computer device comprising a processor and a memory:
The memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is configured to execute the identity authentication method according to the above aspect according to instructions in the computer program.
In another aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium is used to store a computer program, where the computer program is used to execute the identity authentication method described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on a computer device, causes the computer device to perform the identity authentication method.
According to the above technical solution, in response to a task request from a target account for a task to be executed, information is extracted from the task request to obtain information to be verified in multiple dimensions, where the multiple dimensions at least include the voice voiceprint dimension and the language behavior dimension, or at least include the text input behavior dimension and the language behavior dimension. The task request can thus be used to mine identity-related information of the corresponding user in multiple dimensions, which is richer in content than single-dimension information and better reflects the user's identity. Based on the multi-dimensional information to be verified, an identity confidence calculation for the target account can be performed to obtain the matching confidence of the task request, which indicates the probability that the user who made the task request is the user corresponding to the target account. If the verification result of the task request is determined, according to the matching confidence, to be that verification passes, the identity authentication passes and the task to be executed corresponding to the task request can be executed. Because identity recognition is performed based on the richer multi-dimensional identity-related information, the security verification accuracy of the task to be executed is improved; identity recognition can be performed based on the task request without additional identity verification operations, realizing real-time and imperceptible identity verification, simplifying user operation, and better balancing security and convenience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an identity authentication method according to an embodiment of the present application;
FIG. 2 is a flowchart of an identity authentication method according to an embodiment of the present application;
fig. 3 is a schematic diagram of an authentication flow provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a task request according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another task request acquisition according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an authentication architecture according to an embodiment of the present application;
Fig. 7-16 are schematic flow diagrams of various identity authentications according to embodiments of the present application;
FIG. 17 is a schematic diagram of a complete process according to an embodiment of the present application;
fig. 18-20 are schematic flow diagrams of various identity authentications according to embodiments of the present application;
FIG. 21 is a block diagram of an identity authentication device according to an embodiment of the present application;
Fig. 22 is a block diagram of a terminal device according to an embodiment of the present application;
fig. 23 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Currently, when an agent acquires a task request from a target account, it needs to determine that the task request comes from the user corresponding to the target account, for example by requiring input of additional verification information such as a password, a verification code, or biometric information. However, inputting such additional verification information typically requires extra operations by the user, reducing the convenience advantage of the agent.
To solve the above technical problem, the application provides an identity authentication method and a related device, which use a task request to mine identity-related information of the requesting user in multiple dimensions. Compared with single-dimension identity-related information, this multi-dimensional information is richer in content and better reflects the user's identity. Performing identity recognition based on this richer multi-dimensional information improves the security verification accuracy of the task to be executed; identity recognition can be performed based on the task request without additional identity verification operations, realizing real-time and imperceptible identity verification, simplifying user operation, and better balancing security and convenience.
The identity authentication method provided by the embodiment of the application can be implemented through computer equipment, wherein the computer equipment can be terminal equipment or a server, and the server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server for providing cloud computing service. Terminal devices include, but are not limited to, cell phones, computers, intelligent voice interaction devices, intelligent home appliances, vehicle terminals, aircraft, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
It will be appreciated that in the specific embodiments of the present application, data relating to voice input information, text input information, user information, etc. is referred to, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions.
In order to facilitate understanding of the technical scheme provided by the application, an identity authentication method provided by the embodiment of the application will be described below in conjunction with an actual application scenario.
Fig. 1 is a schematic diagram of an application scenario of an identity authentication method according to an embodiment of the present application. The scenario includes a server 10 and a terminal device 20; an application program for identity authentication is installed in the terminal device 20, and the server 10 corresponding to the application program and the terminal device 20 interact through a network. Either the server 10 or the terminal device 20 may serve as the aforementioned computer device for executing the identity authentication method. The terminal device 20 is used to interact with a user: after a target account is logged in, it acquires a task request corresponding to the target account to trigger an identity authentication task, and it may also display the verification result of the task request, the execution result of the task to be executed, and the like. The terminal device 20 is described below as an example of the aforementioned computer device.
In response to a task request from a target account for a task to be executed, the terminal device 20 may extract information from the task request to obtain information to be verified in multiple dimensions. The multiple dimensions at least include a voice voiceprint dimension and a language behavior dimension, or at least include a text input behavior dimension and a language behavior dimension; that is, they include the language behavior dimension and may further include at least one of the text input behavior dimension and the voice voiceprint dimension. The task request can thus be used to mine identity-related information of the corresponding user in multiple dimensions, which is richer in content than single-dimension information and better reflects the user's identity.
Based on the information to be verified in multiple dimensions, the terminal device 20 can perform identity confidence calculation for the target account to obtain the matching confidence of the task request, thereby indicating the probability that the user who makes the task request is the user corresponding to the target account.
If the verification result of the task request is determined to be that verification passes according to the matching confidence, i.e. the identity authentication passes, the terminal device 20 can execute the task to be executed corresponding to the task request. Identity recognition is thus performed based on the richer multi-dimensional identity-related information, improving the security verification accuracy of the task to be executed; identity recognition can be performed based on the task request without additional identity verification operations, realizing real-time and imperceptible identity verification, simplifying user operation, and better balancing security and convenience.
Fig. 2 is a flowchart of an identity authentication method according to an embodiment of the present application. In this embodiment, the identity authentication method is described with the terminal device serving as the aforementioned computer device, and may include:
s101, responding to a task request from a target account, and extracting information from the task request to obtain information to be verified in multiple dimensions.
With the development of artificial intelligence, an AI-based agent may act as a virtual assistant to perform tasks on behalf of a user, including highly security-sensitive operations such as payment transactions, money transfers, red-packet distribution, and financial product purchases. AI-based agents can, for example, be developed based on large-model technology capable of natural-language interaction with users and of performing specific tasks on their behalf. The agent may be provided in a terminal device, through which it interacts with the user.
Before performing these operations, the agent needs to confirm that the operation task request comes from the legitimate user rather than from someone else, so as to ensure account security; that is, user identity authentication is required before the operations are performed. Specifically, the agent acquires the task request from the target account and needs to determine that the task request comes from the user corresponding to the target account, for example by requiring input of additional verification information such as a password, a verification code, or biometric information. The additional verification information may be a preset password or personal identification number (PIN), a one-time verification code, or biometric information such as a fingerprint, face, iris, or voiceprint.
Referring to fig. 3, which is a schematic diagram of an identity verification flow provided in an embodiment of the present application, a user inputs login information such as an account password through a login interface for login verification; if the login verification passes, an authorized state is entered, and if it fails, the flow returns to the login interface. In the authorized state, if a task involving an ordinary operation is acquired, the operation can be executed without re-verification. If a task involving a sensitive operation is acquired, secondary verification is needed to acquire additional verification information: if the secondary verification passes, the operation is executed; if it fails, execution of the sensitive operation is refused, thereby ensuring account security.
However, inputting such additional verification information typically requires extra operations by the user, reducing the convenience advantage of the agent. In addition, the extra operations require pausing the current dialogue between the user and the agent to perform the verification, breaking the natural and smooth interaction flow and making the interaction experience feel fragmented. Additional verification information such as passwords or verification codes also tends to increase the user's cognitive burden, further reducing the convenience advantage of the agent. Moreover, single biometric identification (e.g., voiceprint recognition alone) is easily deceived by high-quality voice cloning or deep-forgery technology, so a security risk exists.
In the embodiment of the application, after logging in to the target account, the user can interact with the agent by voice or text using the target account; the agent system performs identity verification on the user in the background and determines whether to execute the task according to the identity verification result. The interaction information from the target account can be various requests. The agent can judge the request category of each request, where the request category indicates the security sensitivity level, and determine according to the request category whether the request is a sensitive operation. If so, the request is a task request related to a sensitive operation as described below, and an identity authentication process can be triggered to perform identity authentication based on the task request; if not, the request is an ordinary request and can be processed directly without identity authentication.
Specifically, if the request is determined to be a task request, it corresponds to a task request from the target account that requests execution of a corresponding task to be executed. The task to be executed may be a highly security-sensitive operation such as a payment transaction, a money transfer, red-packet distribution, or a financial product purchase; a medium security-sensitive operation such as modifying user information; or a low security-sensitive operation such as querying a balance or transfer details. The request category can be a payment category, a transfer category, a data modification category, a query category, and the like; the payment category may be further divided into large-amount and small-amount payment categories, and the transfer category into large-amount and small-amount transfer categories. Alternatively, the request category may directly be a high, medium, or low security sensitivity category. In general, the payment and transfer categories correspond to the high security sensitivity category, the data modification category to the medium security sensitivity category, and the query category to the low security sensitivity category.
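The category-to-sensitivity correspondence described above can be written down directly; the exact mapping keys and the trigger policy are assumptions consistent with the examples given.

```python
# Mapping from request category to security sensitivity level, following the
# correspondence in the text (payment/transfer -> high, data modification ->
# medium, query -> low). Unknown categories default to "high" defensively.
SENSITIVITY = {
    "payment": "high",
    "transfer": "high",
    "data_modification": "medium",
    "query": "low",
}

def triggers_authentication(request_category):
    # Assumed policy: any non-low-sensitivity request is treated as a
    # sensitive operation and triggers the identity authentication process.
    return SENSITIVITY.get(request_category, "high") != "low"
```
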
The task request may include voice input information, text input information, or both. The voice input information reflects voice input content and voice voiceprint information, which can be obtained by extracting information from the voice input information. The voice input content reflects voice language behavior information (representing the user's unique habits in using natural language), such as vocabulary selection preferences, sentence structure habits, habitual expressions, and the frequency of special words (such as modal particles). The voice voiceprint information reflects the unique vocal pattern formed when the user speaks, including biometric characteristics formed by pitch, timbre, intonation, and the like, and can be represented by voiceprint indicators extracted from the original audio, such as speech spectrum information, Mel-frequency cepstral coefficients (MFCC), fundamental frequency contour, and pitch variation.
The text input information reflects text input content and text input behavior information, which can be obtained by extracting information from the text input information. The text input content reflects the user's text language behavior information during text expression, such as word selection preferences, sentence structure habits, habitual expressions, and special-word frequency. The text input behavior information is behavior data generated while the user inputs text, representing the unique habits the user exhibits when typing on a keyboard or touch screen, such as key intervals, typing rhythm, key pause patterns, key pressure, input speed, and error correction patterns.
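As an illustration of the text input behavior dimension, the following sketch derives simple key-interval and typing-rhythm features from a sequence of key-press timestamps; the feature names and timestamp format are hypothetical, not from the source:

```python
from statistics import mean, pstdev

def keystroke_features(key_press_times_ms):
    # Key intervals: time between consecutive key presses (ms).
    intervals = [b - a for a, b in zip(key_press_times_ms, key_press_times_ms[1:])]
    span_ms = key_press_times_ms[-1] - key_press_times_ms[0]
    return {
        "mean_interval_ms": mean(intervals),       # typing rhythm (average)
        "interval_stddev_ms": pstdev(intervals),   # rhythm regularity
        "keys_per_second": 1000.0 * len(intervals) / span_ms,  # input speed
    }
```

A real system would collect many more behavior signals (key pressure, pause patterns, error corrections); this only shows the shape of the extraction step.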
The task request may be obtained through an interactive interface, for example through a text input box or a voice input control in the interactive interface. As interaction information from the target account, the voice input content or text input content corresponding to the task request can be displayed in the interactive interface in the form of a chat bubble. The agent can act as the chat object in the interactive interface, and notification information output by the agent to the target account can also exist in the form of chat bubbles.
Referring to fig. 4, a schematic diagram of a task request provided by an embodiment of the present application, the content of the task request is "help me transfer 300 yuan to Zhang San", shown in a first chat bubble 21; this task request includes only voice input information, i.e., the user can instruct the agent by voice to transfer money to a friend. Referring to fig. 5, another schematic diagram of a task request provided by an embodiment of the present application, the content of the task request is "check my investment products, and help me purchase the product with the highest yield with the remaining balance", shown in the first chat bubble 21; this task request includes only text input information, i.e., the user can instruct the agent by text to query personal account information and perform an investment operation.
Referring to fig. 6, a schematic diagram of an identity authentication architecture according to an embodiment of the present application. The architecture may include a data acquisition layer through which the task request may be acquired; the data acquisition layer includes a voice acquisition module, a text content acquisition module, and an input behavior acquisition module.
The voice input information can be acquired through the voice acquisition module. An original audio signal can be collected through a microphone of the mobile device based on acquisition metadata (such as sampling rate and sampling duration). The original audio signal can be used directly as the voice input information, or it can be preprocessed (e.g., denoised) to obtain the voice input information. The voice input information carries both voice input content and voice voiceprint information. The voice input content can be acquired through the text content acquisition module by performing speech recognition on the voice input information, and is used for extracting voice language behavior information. The text input content can likewise be acquired through the text content acquisition module and is used for extracting text language behavior information. The text input behavior information can be acquired through the input behavior acquisition module, which records the user's behavior data during the text input process.
In response to a task request from the target account for a task to be executed, information extraction can be performed on the task request to obtain information to be verified in multiple dimensions. The multiple dimensions at least include a voice voiceprint dimension and a language behavior dimension, or at least include a text input behavior dimension and a language behavior dimension; that is, the multiple dimensions include the language behavior dimension plus at least one of the text input behavior dimension and the voice voiceprint dimension. The task request can thus be used to mine identity-related information of the corresponding user in multiple dimensions, which is richer than identity-related information of a single dimension and therefore better reflects the user's identity.
Specifically, for voice input information in a task request, information extraction can be performed on the voice input information to obtain voice voiceprint information of the voice voiceprint dimension and voice language behavior information of the language behavior dimension, so that the information to be verified includes the voice voiceprint information and the voice language behavior information, corresponding to the voice voiceprint dimension and the language behavior dimension respectively. By extracting information from the voice input information and the text input information separately, information to be verified with higher accuracy can be obtained: when the task request includes only voice input information or only text input information, information to be verified in two dimensions can be obtained, and when the task request includes both, information to be verified in three dimensions can be obtained.
In the embodiment of the application, information extraction can also be performed in combination with other interaction content, because the amount of information in a single task request is relatively small, and combining other content helps obtain richer content. Referring to fig. 7, a flow chart of identity authentication provided by an embodiment of the present application, S101 may specifically include: in response to a task request from the target account, obtaining the interaction content from the target account within a preset time period before the task request; and S1011, performing information extraction on the task request and the interaction content respectively to obtain information to be verified in multiple dimensions. Because the time interval between the interaction content and the task request is short, the correlation between them is high and the probability that they belong to the same user is high, so the interaction content can be used to assist identity authentication and increase the amount of information on which authentication is based. The duration of the preset time period may be, for example, 30 minutes.
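The step of collecting interaction content within the preset time period before the task request might look like the following sketch, assuming interaction records are stored as (timestamp, content) pairs; the storage format and function name are assumptions:

```python
from datetime import datetime, timedelta

PRESET_WINDOW = timedelta(minutes=30)  # example duration from the text

def recent_interactions(history, request_time, window=PRESET_WINDOW):
    # Keep records whose timestamp falls within the preset time period
    # immediately before the task request.
    return [content for ts, content in history
            if request_time - window <= ts < request_time]
```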
In an embodiment, referring to fig. 8, a flowchart of still another identity authentication provided by an embodiment of the present application, which may specifically include: S10A, for voice input information in the task request and the interaction content, performing information extraction on the voice input information to obtain voice voiceprint information of the voice voiceprint dimension and voice language behavior information of the language behavior dimension, so that the information to be verified includes the voice voiceprint information and the voice language behavior information, corresponding to the voice voiceprint dimension and the language behavior dimension respectively; and S10B, for text input information in the task request and the interaction content, performing information extraction on the text input information to obtain text input behavior information of the text input behavior dimension and text language behavior information of the language behavior dimension, so that the information to be verified includes the text input behavior information and the text language behavior information, corresponding to the text input behavior dimension and the language behavior dimension respectively. In this way, information extraction is performed on the voice input information and the text input information separately, and information to be verified with higher accuracy can be obtained: when the task request and the interaction content include only voice input information or only text input information, information to be verified in two dimensions can be obtained, and when they include both, information to be verified in three dimensions can be obtained.
In the process of extracting information from the text input information, referring to fig. 9, a flowchart of still another identity authentication provided by an embodiment of the present application, S10B may include: S10B1, performing information extraction on the text input information to obtain text language behavior information of the language behavior dimension; and S10B2, performing information extraction on the text input information to obtain text input behavior information of the text input behavior dimension. Referring to fig. 6, the identity authentication architecture may include an information extraction layer for extracting information from the task request and the interaction content. The information extraction layer may include an input behavior information extraction module for performing S10B2, and a text content information extraction module for performing S10B1.
In the process of extracting information from the voice input information, referring to fig. 9, S10A may include: S10A1, performing information extraction on the voice input information to obtain voice voiceprint information of the voice voiceprint dimension; S10A2, performing text conversion on the voice input information to obtain a voice-converted text; and S10A3, performing information extraction on the voice-converted text to obtain voice language behavior information of the language behavior dimension. Referring to fig. 6, the information extraction layer may further include a voice voiceprint information extraction module for performing S10A1, and the text content information extraction module for performing S10A2 and S10A3.
The interaction content may likewise include voice input information, text input information, or both. The voice input information reflects voice input content and voice voiceprint information, which can be obtained by extracting information from the voice input information. The text input information reflects text input content and text input behavior information, which can be obtained by extracting information from the text input information.
In response to the task request, a notification message corresponding to the task request may also be presented; referring to fig. 4 and 5, the content of the notification message is shown in the second chat bubble 22. In response to the task request, dimension information of the multiple dimensions may also be presented: for voice input information, dimension information of the voice voiceprint dimension and the language behavior dimension may be presented, and for text input information, dimension information of the text input behavior dimension and the language behavior dimension may be presented. Referring to fig. 4 and 5, the dimension information is shown in the third chat bubble 23; the dimension information in fig. 4 includes the voice voiceprint dimension and the language behavior dimension, and that in fig. 5 includes the language behavior dimension and the text input behavior dimension. This makes it convenient for the user to learn the authentication information for the task request.
S102, based on information to be verified in multiple dimensions, identity confidence calculation aiming at a target account is performed, and matching confidence of a task request is obtained.
In the embodiment of the application, based on the information to be verified in multiple dimensions, identity confidence calculation for the target account can be performed to obtain the matching confidence of the task request. The matching confidence indicates the probability that the user who issued the task request is the user corresponding to the target account, i.e., a numerical indicator of the credibility of the user's identity, and is used to judge whether the current interacting user who sent the task request is a legitimate user.
In processing the information to be verified in multiple dimensions, referring to fig. 10, a flow chart of identity authentication provided by an embodiment of the present application, S102 may include: S1021, performing feature extraction on the information to be verified in multiple dimensions to obtain multiple features to be verified, where the features to be verified carry deeper information and summarize the information to be verified; S1022, performing feature space mapping on the multiple features to be verified to obtain multiple standard features, i.e., standardizing the features so that the multiple standard features belong to the same feature space, a unified measurement space that facilitates comprehensive evaluation; and S1023, performing identity confidence calculation for the target account based on the multiple standard features to obtain the matching confidence of the task request. Calculating the matching confidence by feature extraction and feature fusion makes full use of the information to be verified in multiple dimensions, yields a matching confidence with higher correlation, and gives the calculation higher accuracy and stability.
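The feature space mapping of S1022 could, for example, be a z-score standardization that places features of different dimensions into a unified measurement space. The sketch below assumes the per-dimension statistics were obtained during model construction; the source does not specify the mapping, so this is one plausible choice:

```python
def map_to_feature_space(feature, means, stds):
    # Z-score standardization: (x - mean) / std per component, so that
    # standard features from different dimensions become comparable in
    # a unified measurement space.
    return [(x - m) / s for x, m, s in zip(feature, means, stds)]
```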
Specifically, referring to fig. 11, a flowchart of still another identity authentication provided by an embodiment of the present application, S1021 may include: S10G1, performing feature extraction on the text input behavior information to obtain a text input behavior feature; S10G2, performing feature extraction on the language behavior information to obtain a language behavior feature, where the language behavior information includes the voice language behavior information and the text language behavior information, and the language behavior feature includes a voice language behavior feature extracted from the voice language behavior information and a text language behavior feature extracted from the text language behavior information; and S10G3, performing feature extraction on the voice voiceprint information to obtain a voice voiceprint feature.
S1022 may include: S10H1, performing feature space mapping on the text input behavior feature to obtain a text input behavior standard feature; S10H2, performing feature space mapping on the language behavior feature to obtain a language behavior standard feature, where the language behavior standard feature includes a voice language behavior standard feature and a text language behavior standard feature obtained by feature space mapping of the voice language behavior feature and the text language behavior feature respectively; and S10H3, performing feature space mapping on the voice voiceprint feature to obtain a voice voiceprint standard feature.
In the feature extraction stage, feature extraction of the relevant dimensions can be performed through dimension-specific sub-models. Referring to fig. 6, the identity authentication architecture can include a model processing layer, which includes the dimension-specific sub-models: a voice voiceprint extraction model, a language behavior extraction model, and a text input behavior extraction model. The voice voiceprint extraction model is used for extracting features of the voice voiceprint information to obtain the voice voiceprint feature. The language behavior extraction model is used for extracting features of the voice language behavior information to obtain the voice language behavior feature and features of the text language behavior information to obtain the text language behavior feature, and may be, for example, an N-gram model. The text input behavior extraction model is used for extracting features of the text input behavior information to obtain the text input behavior feature.
In a specific implementation, different dimension-specific sub-models can be activated according to the input modes of the task request and the interaction content. Specifically, if the task request and the interaction content include voice input information, the voice voiceprint extraction model and the language behavior extraction model can be activated so as to perform identity verification based on the voice voiceprint information and the voice language behavior information; if the task request and the interaction content include text input information, the text input behavior extraction model and the language behavior extraction model can be activated so as to perform identity verification based on the text language behavior information and the text input behavior information. This flexible mechanism of activating dimension-specific sub-models according to the input mode ensures that the system can collect sufficient feature data for identity verification under any interaction mode.
In the stage of obtaining the matching confidence of the task request based on the multiple standard features, referring to fig. 12, a flow chart of another identity authentication provided by an embodiment of the application, S1023 may specifically include: S10C, performing identity confidence calculation for the target account based on the multiple standard features to obtain multiple initial confidences corresponding to the multiple standard features; and S10D, performing a weighted average of the multiple initial confidences according to the weights of the multiple dimensions to obtain the matching confidence of the task request. That is, the identity confidences are calculated separately and then fused.
In S10C, identity confidence calculation for the target account can be performed based on the text input behavior standard feature to obtain a text input behavior score as one initial confidence; identity confidence calculation for the target account can be performed based on the language behavior standard feature to obtain a language behavior score as another initial confidence; and identity confidence calculation for the target account can be performed based on the voice voiceprint standard feature to obtain a voiceprint score as a further initial confidence.
In S10D, the initial confidence of the j-th dimension of the three dimensions is denoted FeatureScore[j], the weight of the j-th dimension is denoted Weight[j], and the matching confidence ConfidenceScore may be expressed as ConfidenceScore = Σ(FeatureScore[j]·Weight[j]) / Σ(Weight[j]). For example, the weight of the voice voiceprint dimension can be 0.5, the weight of the language behavior dimension can be 0.5, and the weight of the text input behavior dimension can be 0.5.
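The weighted average of S10D can be written directly from the formula above (a minimal sketch; function and variable names are illustrative):

```python
def confidence_score(feature_scores, weights):
    # ConfidenceScore = sum(FeatureScore[j] * Weight[j]) / sum(Weight[j])
    weighted = sum(s * w for s, w in zip(feature_scores, weights))
    return weighted / sum(weights)
```

With equal weights of 0.5 per dimension, the result reduces to the plain average of the three initial confidences.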
Alternatively, in the stage of obtaining the matching confidence of the task request based on the multiple standard features, referring to fig. 13, a flow chart of another identity authentication provided by an embodiment of the application, S1023 may specifically include: S10E, fusing the multiple standard features according to the weights of the multiple dimensions to obtain a comprehensive feature; and S10F, performing identity confidence calculation for the target account based on the comprehensive feature to obtain the matching confidence of the task request. That is, feature fusion can be performed first and identity confidence calculation performed afterwards.
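A minimal sketch of this alternative path: weighting each dimension's standard feature and concatenating them into a comprehensive feature (S10E), which a fusion model would then score (S10F). Representing the features as plain vectors and fusing by weighted concatenation is an assumption for illustration:

```python
def fuse_features(standard_features, weights):
    # Weight each dimension's standard feature vector and concatenate
    # them into a single comprehensive feature (S10E); a fusion model
    # would then compute the matching confidence from it (S10F).
    fused = []
    for vector, weight in zip(standard_features, weights):
        fused.extend(x * weight for x in vector)
    return fused
```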
By means of confidence fusion or feature fusion, comprehensive evaluation of the information of the three dimensions (voice voiceprint, text input behavior, and language behavior) can be achieved, realizing three-dimensional feature fusion; the resulting matching confidence integrates information from multiple dimensions and is therefore comprehensive and accurate. The weights of the dimensions can be preset according to the recognition accuracy of each dimension, so that more reliable features occupy a larger proportion in the final judgment and the matching confidence has higher accuracy.
Referring to fig. 6, the model processing layer may further include a feature standardization module and a three-dimensional feature fusion model. The feature standardization module is used for performing feature space mapping on the multiple features to be verified respectively to obtain the multiple standard features, providing input for the three-dimensional feature fusion model. The three-dimensional feature fusion model is a correlation model among the three-dimensional features and is used for performing identity confidence calculation for the target account based on the standard features to obtain the matching confidence of the task request; an ensemble learning method is adopted to fuse the features of the available dimensions for discrimination, realizing cross-verification among the features.
After the matching confidence is determined, it can be displayed, so that the user can form a reasonable expectation of the identity authentication result according to the matching confidence. Referring to fig. 4 and 5, the matching confidence is shown in the third chat bubble 23.
And S103, if the verification result of the task request is determined to be verification passing according to the matching confidence, executing the task to be executed corresponding to the task request.
In the embodiment of the application, the verification result of the task request can be determined according to the matching confidence. If the verification result is that verification passes, indicating that authentication succeeds, the task to be executed corresponding to the task request can be executed. Identity recognition is thus performed based on richer, multi-dimensional identity-related information, improving the accuracy of security verification for the task to be executed; moreover, identity recognition can be performed based on the task request itself without additional identity verification operations, achieving real-time, imperceptible identity verification, simplifying user operations, and better balancing security and convenience.
The execution of the task to be executed can be realized by calling an interface corresponding to the task to be executed.
In the embodiment of the application, whether the verification result of the task request is that verification passes can be determined according to the matching confidence and a confidence threshold. For example, when the matching confidence is greater than the confidence threshold, the verification result of the task request may be regarded as verification passing. Setting the confidence threshold is a key link in balancing system security and user experience; the confidence threshold can be a fixed threshold or a threshold determined according to the request category of the task request, and different request categories can correspond to different thresholds.
The request category can be a payment category, a transfer category, a data modification category, a query category, and the like; the payment category can further include a large-amount payment category and a small-amount payment category, and the transfer category can further include a large-amount transfer category and a small-amount transfer category; alternatively, the request category can be a high security sensitivity category, a medium security sensitivity category, a low security sensitivity category, and the like. In general, the payment and transfer categories correspond to the high security sensitivity category, for which a higher confidence threshold, e.g., 90%, can be set; the data modification category corresponds to the medium security sensitivity category, for which a medium confidence threshold, e.g., 80%, can be set; and the query category corresponds to the low security sensitivity category, for which a lower confidence threshold, e.g., 65%, can be set.
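The category-to-threshold determination can be sketched as follows, using the example thresholds above; the category names and the default of treating unknown categories as high sensitivity are assumptions:

```python
# Example thresholds from the text; the category names are illustrative.
CONFIDENCE_THRESHOLDS = {"high": 0.90, "medium": 0.80, "low": 0.65}
SENSITIVITY = {"payment": "high", "transfer": "high",
               "data_modification": "medium", "query": "low"}

def threshold_for_request(category: str) -> float:
    # Determine the confidence threshold corresponding to the task
    # request according to its request category; unknown categories
    # default to the high-sensitivity threshold for safety.
    return CONFIDENCE_THRESHOLDS[SENSITIVITY.get(category, "high")]
```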
In the embodiment of the application, referring to fig. 14, a schematic flow chart of another identity authentication provided by an embodiment of the application, before S103 the flow may further include: S104, determining the confidence threshold corresponding to the task request according to the request category of the task request; and S105, if the matching confidence and the confidence threshold satisfy a verification passing condition, for example, the matching confidence is greater than or equal to the confidence threshold, determining that the verification result of the task request is verification passing, indicating that the user is determined to be a legitimate user, and executing the task to be executed. Different request categories can thus correspond to different thresholds, achieving differentiated authentication of task requests: secure execution of task requests of high security sensitivity categories is ensured, and convenient execution of task requests of low security sensitivity categories is ensured.
Further, referring to fig. 14, after S104 the flow may further include: S106, if the matching confidence and the confidence threshold satisfy an additional verification condition, for example, the matching confidence is slightly less than the confidence threshold (e.g., less than the confidence threshold but greater than or equal to 60% of the confidence threshold), determining that the verification result of the task request is additional verification, indicating that the matching confidence is insufficient to determine whether the user is a legitimate user. An additional verification information input indication is then provided, e.g., prompting for a password, fingerprint, face information, or verification code; the task to be executed is executed when verification is determined to pass according to the additional verification information, and is refused when verification is determined to fail.
After S104, the flow may further include: S107, if the matching confidence and the confidence threshold satisfy a verification failure condition, for example, the matching confidence is far less than the confidence threshold (e.g., less than 60% of the confidence threshold), determining that the verification result of the task request is verification failure, indicating that the user is determined to be an illegitimate user, and refusing to execute the task to be executed. In this way, matching confidences that do not satisfy the verification passing condition are handled separately, ensuring both the security and the convenience of identity verification.
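The three-way decision of S105 to S107 can be sketched as follows (function name and return labels are illustrative):

```python
def verification_result(confidence: float, threshold: float) -> str:
    # S105: pass when the matching confidence reaches the threshold.
    if confidence >= threshold:
        return "pass"
    # S106: additional verification when slightly below the threshold
    # (at least 60% of it, per the example in the text).
    if confidence >= 0.6 * threshold:
        return "additional_verification"
    # S107: fail when far below the threshold.
    return "fail"
```

For a transfer request with a 90% threshold, a confidence of 70% would trigger additional verification rather than an outright rejection.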
After the verification result is determined, it may also be presented; referring to fig. 4 and 5, the verification result may be shown in the fourth chat bubble 24. After the task to be executed has been executed, its execution result may be displayed, also in the fourth chat bubble 24 as shown in fig. 4 and 5. Of course, the execution progress of the task to be executed may also be displayed. In this way, the user can learn both the identity authentication result and the task execution result, improving user experience.
Referring to fig. 6, the identity authentication architecture may include a decision control layer, which may include a security level classifier, a verification policy selector, and a result processor. The security level classifier is used for determining the request category according to the task request, where the request category indicates the security sensitivity level, so as to determine the confidence threshold according to the request category. The verification policy selector is used for determining the verification result (e.g., verification passes, additional verification is needed, or verification fails) according to the matching confidence and the confidence threshold, and determining the corresponding verification policy (e.g., execute the task to be executed, provide an additional verification information input indication, or refuse to execute the task to be executed) according to the verification result. The result processor is used for executing the determined verification policy.
In the embodiment of the application, authentication is performed according to the task request, or according to the task request and the interaction information; that is, authentication is performed by analyzing the data generated by the user's normal interaction, with no additional operation required. This achieves imperceptible, real-time "authenticate-while-using" verification, keeps the interaction smooth, and realizes an optimal balance between security and convenience. In business application scenarios, user experience and the efficiency of executing security-sensitive operations are significantly improved; according to internal test data, the time for users to complete tasks is greatly shortened compared with the traditional secondary authentication mode, while an extremely high authentication accuracy rate is maintained, effectively solving the difficulty of identity authentication when an agent executes high-security operations. Performing identity verification based on continuously acquired interaction information amounts to continuous identity authentication: the user's identity can be verified continuously, avoiding the security risk of the device being obtained by others during use and significantly improving security.
In addition, multidimensional verification is performed through fusion analysis of three-dimensional data: even if the features of a single dimension are insufficient, an accurate judgment can still be made based on the other dimensions. Compared with single-dimensional identity verification, this can effectively prevent impersonation, reduce fraud risk, and provide a reliable identity verification guarantee for intelligent agents executing sensitive operations. Different confidence thresholds are set according to security sensitivity, differentiating verification requirements, and this security classification strategy balances security and convenience.
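One way the weighted fusion of per-dimension confidences could look in practice (the dimension names and weights are illustrative; the embodiment only states that initial confidences are weighted-averaged by dimension weight):

```python
def fuse_confidences(confidences, weights):
    """Weighted average of per-dimension identity confidences, so that a weak
    single dimension can be compensated by the remaining dimensions.
    `confidences` maps dimension name -> initial confidence in [0, 1];
    `weights` maps dimension name -> dimension weight (illustrative values)."""
    total = sum(weights[d] for d in confidences)
    return sum(c * weights[d] for d, c in confidences.items()) / total
```

For example, a weak language-behavior score can be offset by a strong voiceprint score, matching the "single dimension insufficient, judge from the others" behavior described above.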
In the embodiment of the application, step S102 can be realized through the feature processing model; that is, the feature processing model can be used to calculate the identity confidence for the target account based on the information to be verified in multiple dimensions, so as to obtain the matching confidence of the task request. Specifically, the feature processing model may comprise the voice voiceprint extraction module, the text input behavior extraction module, the language behavior extraction module, the feature standardization module and the three-dimensional feature fusion model in fig. 6. The construction of the feature processing model may be performed before the feature processing model is used.
In the construction stage of the feature processing model, referring to fig. 15, a flowchart of identity authentication provided by the embodiment of the application may include: S201, acquiring historical interaction information from a target account, where the historical interaction information may be historical voice input information and historical text input information; S202, extracting information from the historical interaction information to obtain training information in multiple dimensions; and S203, constructing the feature processing model based on the training information. Thus, an initial behavior baseline is established in advance based on the voice information and text information of the target account, giving the feature processing model the capability to recognize the voice and text of the target account. The historical interaction information needs to reach a certain quantity to ensure enough information for constructing the feature processing model and thereby ensure its accuracy, for example, at least 10 pieces of historical voice input information and at least 15 pieces of historical text input information. The historical interaction information can be subjected to sample cleaning and screening to filter out samples with heavy environmental noise or abnormal user states, thereby ensuring model quality.
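The sample-count check and cleaning step described above might be sketched as follows; the minimums of 10 voice and 15 text samples come from the text, while the `noise`/`abnormal` fields and the 0.3 noise cutoff are hypothetical:

```python
MIN_VOICE_SAMPLES = 10   # minimum stated in the text
MIN_TEXT_SAMPLES = 15    # minimum stated in the text
MAX_NOISE_SCORE = 0.3    # hypothetical environmental-noise cutoff

def clean_and_check(voice_samples, text_samples):
    """Filter out noisy or abnormal samples, then check whether enough
    historical interaction information remains to build the behavior baseline."""
    voice_kept = [s for s in voice_samples if s.get("noise", 0.0) <= MAX_NOISE_SCORE]
    text_kept = [s for s in text_samples if not s.get("abnormal", False)]
    enough = (len(voice_kept) >= MIN_VOICE_SAMPLES
              and len(text_kept) >= MIN_TEXT_SAMPLES)
    return voice_kept, text_kept, enough
```

If `enough` is false, the system would continue guided or silent collection before attempting to construct the model.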
Before the historical interaction information is obtained, the target account can be registered and logged in, and the target account can complete traditional KYC (Know Your Customer) authentication, such as a short message verification code or face recognition, to obtain basic information. The user can be informed that three-dimensional features will be continuously collected for authentication, and after the user grants authorization, the method can enter a stage of waiting to be triggered.
In the process of acquiring the historical interaction information, referring to fig. 15, before S201 the method may further include providing preset guidance content so that the user provides historical interaction information according to it; for example, the historical voice input information includes voice input information corresponding to the preset guidance content, and the historical text input information includes text input information corresponding to the preset guidance content. Basic user features are thus acquired through guided acquisition, ensuring a sufficient amount of historical interaction information with adequate content.
Referring to fig. 16, a flowchart of yet another authentication provided by the embodiment of the present application is shown. S201 may include: S2011, obtaining voice input information formed by reading the preset guidance content aloud, so that the voice input information corresponding to the preset guidance content may be, for example, audio of the preset guidance content being read; S2012, obtaining text input information formed by typing the preset guidance content; and S2013, obtaining text input information or voice input information formed by answering a question of the preset guidance content, so that the text input information may include answers to the question in text form and the voice input information may include audio of the question being answered.
In the process of acquiring the historical interaction information, the interaction information of the user during daily use can also be collected silently, under the user's authorization, to serve as historical interaction information. Referring to fig. 16, before S201 the method may further include S205, a process of silent collection, so that the user's natural interaction behavior is reflected and used for building the feature processing model. S201 may further include S2014, recording voice interaction information, so that the historical voice input information may be daily voice audio, and S2015, recording text interaction information, so that the historical text input information may be a daily text input record. The period of silent collection may be 3-7 days.
Information is extracted from the historical interaction information to obtain training information in multiple dimensions, where the multiple dimensions may comprise the voice voiceprint dimension, the language behavior dimension and the text input behavior dimension. The voice voiceprint information of the voice voiceprint dimension can be represented by voiceprint indexes extracted from the original audio, such as voice spectrum information, Mel-frequency cepstral coefficients (MFCC), fundamental frequency contour and tone variation, and may comprise 12-20 feature dimensions to enrich the information content of the voice voiceprint information. The language behavior information of the language behavior dimension may include vocabulary selection preference, sentence structure habits, expression habit characteristics, special word frequency, and the like. The text input behavior information of the text input behavior dimension may include key interval, typing rhythm, key pause pattern, key force, input speed, error correction pattern, etc., and may comprise 10-15 feature dimensions to enrich the information content of the text input behavior information.
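As a minimal sketch of the text input behavior dimension, a few of the listed features (key interval, typing rhythm, pauses, input speed) can be derived from key press timestamps. The feature names and the 0.5 s pause cutoff are illustrative; the patent does not fix exact definitions:

```python
from statistics import mean, pstdev

def keystroke_features(key_times):
    """Derive simple text-input-behavior features from a list of key press
    timestamps in seconds (illustrative definitions, not the patent's)."""
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    return {
        "mean_interval": mean(intervals),                     # average key interval
        "rhythm_stddev": pstdev(intervals),                   # typing-rhythm regularity
        "pause_count": sum(1 for d in intervals if d > 0.5),  # long pauses (> 0.5 s)
        "input_speed": len(key_times) / (key_times[-1] - key_times[0]),  # keys per second
    }
```

Analogous per-dimension extractors (MFCC statistics for voiceprint, word-frequency profiles for language behavior) would produce the remaining feature vectors.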
Specifically, the training information of the voice voiceprint dimension, the language behavior dimension and the text input behavior dimension may include voice voiceprint information, language behavior information and text input behavior information, and S202 may include: S2021, extracting information to obtain the voice voiceprint information; S2022, extracting information to obtain the text input behavior information; and S2023, extracting information to obtain the language behavior information. The information extraction manner in S202 may refer to that in S101.
In the process of constructing the feature processing model, S203 may include: S2024, performing feature extraction based on the voice voiceprint extraction model; S2025, performing feature extraction based on the language behavior extraction model; and S2026, performing feature extraction based on the text input behavior extraction model; the feature extraction process may refer to the description of S102. S203 may also include S2027, feature standardization, S2028, determining the matching confidence, and S2029, determining the confidence threshold. The structure and parameters of the feature processing model are thereby constructed in the foregoing manner.
In the training stage, the feature processing model is not yet put into use, but prediction of the automatic verification result can already be performed based on it. At this stage, a traditional verification mode serves as the primary verification means, and its verification result can be used as feedback information for the automatic verification result, so as to optimize the feature processing model.
Referring to fig. 17, a complete flow diagram is provided in an embodiment of the present application, where KYC verification may be completed after user registration in a model building stage, historical interaction information may be acquired by guiding sample acquisition, and after user authorization, historical interaction information may be further acquired by silent acquisition, so as to construct a feature processing model based on the historical interaction information.
After the feature processing model is obtained based on the training information, the daily use stage of the feature processing model can be entered, and imperceptible authentication of the user identity is realized through the processes of S101-S103. Referring to fig. 17, in the daily use stage, a task request may be acquired. If the task request corresponds to a sensitive operation, information extraction and calculation of the matching confidence may be performed: if the matching confidence is high, authentication is considered to pass and the sensitive operation may be performed; if the matching confidence is medium, additional verification information may be acquired, and the operation is performed if that verification passes and refused if it fails; if the matching confidence is low, authentication is considered to fail and the operation is refused. If the task request does not correspond to a sensitive operation, the operation can be performed without user identity authentication.
In the daily use stage of the feature processing model, the feature processing model can be updated so that the model adapts to natural changes in the user's behavior features (such as changes in input habits or speaking style), reducing the false rejection rate caused by such changes. Specifically, the feature processing model may be updated according to the interaction information in the daily use stage, or in combination with the interaction information and feedback information. The feedback information is, for example, the additional verification result for the additional verification information; since this result truly indicates the identity of the user, it is equivalent to feedback on the processing result of the feature processing model. If the verification result corresponding to the additional verification information is a pass, the feature processing model made a misjudgment, and it can be updated accordingly.
In a specific implementation, referring to fig. 18, after S103 the method may include S301: in response to the update condition being satisfied for the ith time, updating the feature processing model for the ith time according to the ith interaction information of the target account, or the ith interaction information and the ith feedback information. The ith satisfaction of the update condition includes at least one of: the non-updated duration reaching a preset duration, a misjudgment of a verification result, and the quantity of interaction information reaching a preset quantity. That is, the model update may be performed periodically, for example once every seven days, or may be triggered by a misjudgment, or by the quantity of interaction information, for example when it reaches 50 items, so that the feature processing model is updated automatically and in time and can adapt to changes in the user's behavior. The priority of interaction information corresponding to a misjudgment can be set higher, so that model updating is performed preferentially based on misjudgment cases.
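A minimal sketch of the three update triggers named above, using the seven-day period and 50-item count given as examples in the text (the function and parameter names are illustrative):

```python
import time

UPDATE_INTERVAL_S = 7 * 24 * 3600  # example period: once every seven days
UPDATE_SAMPLE_COUNT = 50           # example quantity: 50 interaction records

def should_update(last_update_ts, new_sample_count, misjudged, now=None):
    """Return True when at least one of the three update triggers fires:
    elapsed time since the last update, a misjudged verification result,
    or enough accumulated interaction records."""
    now = time.time() if now is None else now
    return (
        now - last_update_ts >= UPDATE_INTERVAL_S   # periodic trigger
        or misjudged                                # misjudgment trigger
        or new_sample_count >= UPDATE_SAMPLE_COUNT  # quantity trigger
    )
```

Because the triggers are combined with `or`, any single condition suffices, matching the "at least one of" wording above.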
Specifically, referring to fig. 19, a schematic flowchart of another identity authentication provided in the embodiment of the present application is shown. The update condition may be determined before S301, which may include: S3021, determining that the non-updated duration meets the preset duration condition; S3022, a misjudgment of the verification result; and S3023, determining that the quantity of interaction information meets the preset quantity condition. If at least one of S3021, S3022 and S3023 holds, it is determined that the update condition is met. Before S301, the method may further include S303: in response to the update condition being satisfied for the ith time, acquiring the ith interaction information of the target account, or the ith interaction information and the ith feedback information.
The ith interaction information and the ith feedback information may be newly generated, i.e., produced within a certain period of time, so that the model can be updated using only newly generated data. Specifically, when i is 1, the 1st update is performed according to the 1st interaction information from the target account, or the 1st interaction information and the 1st feedback information, generated before the update condition is satisfied for the 1st time. For an integer i greater than 1, the ith update is performed according to the ith interaction information from the target account, or the ith interaction information and the ith feedback information, generated between the (i-1)th update and the ith satisfaction of the update condition. Model updating is thereby carried out on incremental data, reducing repeated computation over interaction information and additional verification information.
Referring to fig. 6, the decision control layer comprises a feedback collector, the 1 st feedback information and the i th feedback information can be obtained through the feedback collector, the model processing layer comprises a model updating module, and the feature processing model can be controlled and updated through the model updating module.
The 1 st interaction information, the 1 st feedback information, the i th interaction information and the i th feedback information can be subjected to sample cleaning and screening so as to filter samples with large environmental noise and abnormal user states and ensure the quality of the model.
Information is extracted from the ith interaction information and the ith feedback information to obtain incremental sample information in multiple dimensions, which may comprise the voice voiceprint dimension, the language behavior dimension and the text input behavior dimension. Referring to fig. 19, S301 may further include an information extraction process for at least one of the 1st interaction information, the 1st feedback information, the ith interaction information and the ith feedback information, specifically including: S3041, extracting information to obtain voice voiceprint information; S3042, extracting information to obtain text input behavior information; and S3043, extracting information to obtain language behavior information. The specific manner of information extraction may refer to the description of S101.
The voice voiceprint information of the voice voiceprint dimension can be represented by voiceprint indexes such as voice frequency spectrum information, mel Frequency Cepstrum Coefficient (MFCC), fundamental frequency contour, tone variation and the like extracted from the original audio. Language behavior information of the language behavior dimension may include vocabulary selection preference, sentence pattern structure habit, expression habit characteristics, special word frequency, and the like. The text input behavior information for the text input behavior dimension may include key spacing, typing rhythm, key pause mode, key force, input speed, error correction mode, and the like.
In the process of the ith update of the feature processing model, referring to fig. 20, a flowchart of another identity authentication provided by this embodiment of the present application is shown. S301 may include: S3011, in response to the update condition being satisfied for the ith time, calculating the ith target parameter of the feature processing model according to the ith interaction information from the target account, or the ith interaction information and the ith feedback information; S3012, performing a weighted average of the (i-1)th parameter and the ith target parameter of the feature processing model to obtain the ith parameter of the feature processing model, where the 0th parameter is the original parameter of the feature processing model; and S3013, performing the ith update on the feature processing model according to the ith parameter, so that the feature processing model has the ith parameter. This progressive updating mechanism gradually merges the new behavior features of the incremental sample (NewSample) while retaining the original (i-1)th parameter, so that the model smoothly adapts to slow changes in the user's habits, continuously improving user experience while maintaining security and reducing false rejections.
Specifically, if the ith target parameter is NewSample(i), the (i-1)th parameter is Model(i-1), and the ith parameter is Model(i), then the ith parameter can be expressed as Model(i) = β × Model(i-1) + (1 - β) × NewSample(i), where β is a retention factor that controls the update speed of the model and prevents it from fluctuating sharply; the range of the retention factor may be 0.7-0.9. This updating mechanism ensures that the system can adapt to natural changes in the user's input behavior, speech characteristics or language habits over time, while avoiding abrupt changes in the model due to short-term anomalies.
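The weighted average Model(i) = β × Model(i-1) + (1 - β) × NewSample(i) can be sketched elementwise over a parameter vector; this is a simplification, since a real model update would apply the rule per layer or per statistic:

```python
def update_parameters(prev_params, new_sample_params, beta=0.8):
    """Elementwise progressive update:
    Model(i) = beta * Model(i-1) + (1 - beta) * NewSample(i).
    beta is the retention factor; the text suggests the range 0.7-0.9."""
    if not 0.7 <= beta <= 0.9:
        raise ValueError("retention factor outside the suggested 0.7-0.9 range")
    return [beta * p + (1.0 - beta) * n
            for p, n in zip(prev_params, new_sample_params)]
```

With β near 0.9 the model retains more of its previous parameters and updates slowly; near 0.7 it absorbs new behavior faster, which is exactly the trade-off against abrupt short-term changes described above.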
After the ith parameter is determined, performance evaluation may also be performed on the feature processing model corresponding to the ith parameter. As shown in fig. 19, S3013 may include S30A and S30B, and may further include S305. In S30A, if the feature processing model corresponding to the ith parameter is better than that corresponding to the (i-1)th parameter, S30B is executed; otherwise S305 is executed. In S30B, the feature processing model corresponding to the ith parameter is taken as the new feature processing model to complete the update. If the feature processing model corresponding to the ith parameter is inferior to that corresponding to the (i-1)th parameter, S305 rolls the parameters of the feature processing model back to the (i-1)th parameter and adjusts the retention factor to optimize the model updating process. Update information may be recorded during the model update process.
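The evaluate-then-rollback step (S30A/S30B/S305) could be sketched as below; `evaluate` is a caller-supplied scoring function (higher is better), and the 0.05 retention-factor adjustment is an illustrative choice, not a value from the embodiment:

```python
def apply_or_rollback(prev_params, candidate_params, evaluate, beta):
    """Keep the candidate (ith) parameters only if they score at least as well
    as the previous ((i-1)th) ones; otherwise roll back and raise the
    retention factor (capped at 0.9) so future updates are more conservative."""
    if evaluate(candidate_params) >= evaluate(prev_params):
        return candidate_params, beta          # S30B: accept the ith parameter
    return prev_params, min(beta + 0.05, 0.9)  # S305: roll back to (i-1)th
```

In practice `evaluate` might measure verification accuracy on held-out recent interactions; raising β after a rollback slows the blending of new samples that just degraded the model.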
Based on the identity authentication method provided by the embodiment of the present application, the embodiment of the present application further provides an identity authentication device, and referring to fig. 21, a block diagram of the identity authentication device provided by the embodiment of the present application is shown, where the identity authentication device 1300 includes:
the information extraction unit 1301 is configured to respond to a task request from a target account, and extract information from the task request to obtain information to be verified in multiple dimensions, where the multiple dimensions at least include a voice voiceprint dimension and a language behavior dimension, or at least include a text input behavior dimension and a language behavior dimension;
A confidence coefficient calculating unit 1302, configured to perform an identity confidence coefficient calculation for the target account based on the information to be verified in the multiple dimensions, so as to obtain a matching confidence coefficient of the task request;
And the task execution unit 1303 is configured to execute a task to be executed corresponding to the task request if it is determined that the verification result of the task request is verification passing according to the matching confidence.
Optionally, the information extraction unit includes:
the information acquisition unit is used for responding to a task request from a target account and acquiring interaction content from the target account within a preset time period before the task request;
And the information extraction subunit is used for respectively extracting information from the task request and the interactive content to obtain information to be verified in multiple dimensions.
Optionally, the information extraction subunit includes:
A voice extraction unit for extracting information of the voice input information aiming at the task request and the voice input information in the interactive content to obtain voice voiceprint information of voice voiceprint dimension and voice language behavior information of language behavior dimension, so that the information to be verified comprises the voice voiceprint information and the voice language behavior information, and/or,
The word extraction unit is used for extracting information of the word input information aiming at the task request and the word input information in the interactive content to obtain word input behavior information of word input behavior dimension and word language behavior information of language behavior dimension, so that the information to be verified comprises the word input behavior information and the word language behavior information.
Optionally, the confidence coefficient calculating unit includes:
The feature extraction unit is used for respectively extracting features of the information to be verified in multiple dimensions to obtain multiple features to be verified;
the feature mapping unit is used for mapping the feature space of the plurality of features to be verified to obtain a plurality of standard features;
and the confidence coefficient calculating subunit is used for calculating the identity confidence coefficient aiming at the target account based on the plurality of standard features to obtain the matching confidence coefficient of the task request.
Optionally, the confidence calculating subunit is specifically configured to:
performing identity confidence calculation for the target account based on each of the plurality of standard features respectively to obtain a plurality of initial confidences corresponding to the plurality of standard features, and performing a weighted average of the plurality of initial confidences according to the weights of the plurality of dimensions to obtain the matching confidence of the task request; or,
fusing the plurality of standard features into a comprehensive feature, and performing identity confidence calculation for the target account based on the comprehensive feature to obtain the matching confidence of the task request.
Optionally, the apparatus further includes:
The history information acquisition unit is used for acquiring history interaction information from the target account, wherein the history interaction information comprises history voice input information and history text input information;
the historical information extraction unit is used for extracting information from the historical interaction information to obtain training information with multiple dimensions;
The model construction unit is used for constructing a feature processing model based on the training information, and the feature processing model is used for carrying out identity confidence calculation aiming at the target account based on the information to be verified in multiple dimensions to obtain the matching confidence of the task request.
Optionally, the apparatus further includes:
The guiding unit is used for providing preset guiding content so that the historical voice input information comprises voice input information corresponding to the preset guiding content, and the historical text input information comprises text input information corresponding to the preset guiding content.
Optionally, the apparatus further includes:
The model updating unit is used for responding to the ith satisfying updating condition and updating the feature processing model for the ith time according to the ith interaction information from the target account or the ith interaction information and the ith feedback information, wherein i is a positive integer, and the ith satisfying updating condition comprises at least one of the condition that the non-updated time length satisfies the preset time length, the false judgment of the verification result and the condition that the quantity of the interaction information satisfies the preset quantity.
Optionally, the model updating unit includes:
The parameter determining unit is used for responding to the ith meeting of the updating condition, and calculating and obtaining the ith target parameter of the characteristic processing model according to the ith interaction information from the target account or the ith interaction information and the ith feedback information;
The weighting unit is used for carrying out weighted average on the ith-1 parameter and the ith target parameter of the characteristic processing model to obtain the ith parameter of the characteristic processing model, wherein the 0 th parameter is the original parameter of the characteristic processing model;
And the updating unit is used for carrying out the ith updating on the characteristic processing model according to the ith parameter.
Optionally, the apparatus further includes:
the threshold determining unit is used for determining a confidence threshold corresponding to the task request according to the request category of the task request;
And the verification result determining unit is used for determining that the verification result of the task request is verification passing if the matching confidence coefficient and the confidence coefficient threshold meet the verification passing condition.
Optionally, the apparatus further includes:
the instruction providing unit is used for providing an additional verification information input instruction if the matching confidence coefficient and the confidence coefficient threshold meet additional verification conditions and the verification result of the task request is determined to be that additional verification is needed;
and the refusing execution unit is used for refusing to execute the task to be executed if the matching confidence coefficient and the confidence coefficient threshold value meet the verification failure condition and the verification result of the task request is determined to be verification failure.
Optionally, the apparatus further includes:
The display unit is used for responding to the task request and displaying the dimension information of the plurality of dimensions, and/or displaying the matching confidence, and/or displaying the verification result, and/or displaying the execution result of the task to be executed.
According to the technical scheme, in response to a task request from a target account for a task to be executed, information is extracted from the task request to obtain information to be verified in multiple dimensions, where the multiple dimensions at least include the voice voiceprint dimension and the language behavior dimension, or at least include the text input behavior dimension and the language behavior dimension. The task request can thus be used to mine identity-related information of the requesting user in multiple dimensions; compared with identity-related information of a single dimension, this richer content better reflects the identity of the user. Based on the information to be verified in multiple dimensions, identity confidence calculation for the target account can be performed to obtain the matching confidence of the task request, indicating the probability that the user making the task request is the user corresponding to the target account. If the verification result of the task request is determined to be a pass according to the matching confidence, authentication has passed, and the task to be executed corresponding to the task request can be executed. Identity recognition is thus performed based on multi-dimensional identity-related information with richer content, improving the security verification accuracy of the task to be executed; moreover, since identity recognition can be performed based on the task request itself without additional identity verification operations, real-time and imperceptible identity verification is realized, user operation is simplified, and security and convenience are better balanced.
The embodiment of the application also provides a computer device, which is the computer device introduced above, and can comprise a terminal device or a server, and the identity authentication device can be configured in the computer device. The computer device is described below with reference to the accompanying drawings.
If the computer device is a terminal device, please refer to fig. 22, an embodiment of the present application provides a terminal device, which is exemplified by a mobile phone:
fig. 22 is a block diagram showing a part of the structure of a mobile phone related to a terminal device provided by an embodiment of the present application. Referring to fig. 22, the mobile phone includes a Radio Frequency (RF) circuit 1410, a memory 1420, an input unit 1430, a display unit 1440, a sensor 1450, an audio circuit 1460, a wireless fidelity (WiFi) module 1470, a processor 1480, and a power supply 1490. Those skilled in the art will appreciate that the handset configuration shown in fig. 22 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 22:
the RF circuit 1410 may be used for receiving and transmitting signals during a message or a call, specifically, receiving downlink information from a base station, processing the received downlink information by the processor 1480, and transmitting uplink data to the base station.
The memory 1420 may be used to store software programs and modules, and the processor 1480 performs various functional applications and data processing of the cellular phone by executing the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a storage program area which may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), etc., and a storage data area which may store data created according to the use of the cellular phone (such as audio data, a phonebook, etc.), etc. In addition, memory 1420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 1430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 1430 may include a touch panel 1431 and other input devices 1432.
The display unit 1440 may be used to display information input by a user or information provided to the user and various menus of the mobile phone. The display unit 1440 may include a display panel 1441.
The handset can also include at least one sensor 1450, such as a light sensor, motion sensor, and other sensors.
The audio circuit 1460, speaker 1461, and microphone 1462 may provide an audio interface between the user and the mobile phone.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1470, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access.
The processor 1480 is the control center of the mobile phone; it connects the various parts of the entire phone using various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 1420 and invoking the data stored in the memory 1420.
The handset also includes a power supply 1490 (e.g., a battery) that provides power to the various components.
In this embodiment, the processor 1480 included in the terminal apparatus also has the following functions:
Responding to a task request from a target account, and extracting information from the task request to obtain information to be verified in multiple dimensions, wherein the multiple dimensions at least comprise a voice voiceprint dimension and a language behavior dimension, or at least comprise a text input behavior dimension and a language behavior dimension;
Performing identity confidence calculation for the target account based on the information to be verified in the multiple dimensions, to obtain a matching confidence of the task request; and
If it is determined, according to the matching confidence, that a verification result of the task request is that verification passes, executing a task to be executed corresponding to the task request.
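The three functions above can be illustrated with a minimal sketch. Everything in this sketch is an assumption made for illustration only: the field names, the toy exact-match scorer, and the per-dimension weights are hypothetical and are not taken from the application.

```python
# Minimal sketch of the claimed flow: extract per-dimension "information to be
# verified" from a task request, score each dimension against the target
# account's stored profile, and fuse the scores into one matching confidence.
# All field names, scorers, and weights here are illustrative assumptions.

def extract_dimensions(task_request: dict) -> dict:
    """Step 1: pull per-dimension information out of the task request."""
    dims = {}
    if "voice" in task_request:
        dims["voice_voiceprint"] = task_request["voice"]
        dims["language_behavior"] = task_request.get("transcript", "")
    if "text" in task_request:
        dims["text_input_behavior"] = task_request.get("keystroke_timings", [])
        dims["language_behavior"] = task_request["text"]
    return dims

def score_dimension(name: str, info, profile: dict) -> float:
    """Toy per-dimension scorer: 1.0 on an exact profile match, else 0.0."""
    return 1.0 if profile.get(name) == info else 0.0

def matching_confidence(task_request: dict, profile: dict, weights: dict) -> float:
    """Step 2: weighted average of per-dimension scores (cf. claim 5)."""
    dims = extract_dimensions(task_request)
    total = sum(weights[d] for d in dims)
    return sum(weights[d] * score_dimension(d, v, profile) for d, v in dims.items()) / total

def handle_request(task_request: dict, profile: dict, weights: dict, threshold: float) -> bool:
    """Step 3: execute the task only if the confidence clears the threshold."""
    return matching_confidence(task_request, profile, weights) >= threshold
```

For example, a text-only request whose keystroke timings and wording both match the stored profile would yield a confidence of 1.0 and be executed.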
If the computer device is a server, as shown in fig. 23, fig. 23 is a block diagram of a server 1500 provided by an embodiment of the present application. The server 1500 may vary considerably in configuration or performance, and may include one or more processors 1522, such as central processing units (CPUs), a memory 1532, and one or more storage media 1530 (such as one or more mass storage devices) storing application programs 1542 or data 1544. The memory 1532 and the storage medium 1530 may be transitory or persistent storage. The program stored on the storage medium 1530 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the processor 1522 may be configured to communicate with the storage medium 1530 and execute, on the server 1500, the series of instruction operations in the storage medium 1530.
The server 1500 may also include one or more power supplies 1526, one or more wired or wireless network interfaces 1550, one or more input/output interfaces 1558, and/or one or more operating systems 1541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 23.
In addition, an embodiment of the present application further provides a computer-readable storage medium for storing a computer program for executing the method provided by the above embodiments.
The present application also provides a computer program product comprising a computer program which, when run on a computer device, causes the computer device to perform the method provided by the above embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above method embodiments may be implemented by hardware associated with a computer program. The computer program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The computer-readable storage medium may be at least one of a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing the computer program.
It should be noted that the embodiments in this specification are described in a progressive manner; for identical and similar parts of the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
The foregoing is only a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution readily conceivable by those skilled in the art within the technical scope disclosed by the present application shall fall within the protection scope of the present application. The implementations provided in the above aspects may be further combined to provide further implementations. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (16)
1. An identity authentication method, the method comprising:
Responding to a task request from a target account, and extracting information from the task request to obtain information to be verified in multiple dimensions, wherein the multiple dimensions at least comprise a voice voiceprint dimension and a language behavior dimension, or at least comprise a text input behavior dimension and a language behavior dimension;
Performing identity confidence calculation for the target account based on the information to be verified in the multiple dimensions, to obtain a matching confidence of the task request; and
If it is determined, according to the matching confidence, that a verification result of the task request is that verification passes, executing a task to be executed corresponding to the task request.
2. The method according to claim 1, wherein the extracting information from the task request to obtain the information to be verified in multiple dimensions in response to the task request from the target account includes:
Responding to a task request from a target account, and acquiring interactive content from the target account within a preset time period before the task request;
And respectively extracting information from the task request and the interactive content to obtain information to be verified in multiple dimensions.
3. The method of claim 2, wherein the extracting information from the task request and the interactive content to obtain the information to be verified in multiple dimensions includes:
extracting information from the voice input information in the task request and the interactive content to obtain voice voiceprint information of the voice voiceprint dimension and voice language behavior information of the language behavior dimension, so that the information to be verified comprises the voice voiceprint information and the voice language behavior information; and/or,
Extracting information from the text input information in the task request and the interactive content to obtain text input behavior information of the text input behavior dimension and text language behavior information of the language behavior dimension, so that the information to be verified comprises the text input behavior information and the text language behavior information.
4. The method according to claim 1, wherein the performing the identity confidence calculation for the target account based on the information to be verified in the multiple dimensions to obtain the matching confidence of the task request includes:
respectively extracting features of the information to be verified in multiple dimensions to obtain multiple features to be verified;
respectively carrying out feature space mapping on the plurality of features to be verified to obtain a plurality of standard features;
And carrying out identity confidence calculation aiming at the target account based on the plurality of standard features to obtain the matching confidence of the task request.
5. The method of claim 4, wherein the performing an identity confidence calculation for the target account based on the plurality of standard features to obtain a matching confidence for the task request comprises:
Performing identity confidence calculation for the target account based on each of the plurality of standard features, respectively, to obtain a plurality of initial confidences corresponding to the plurality of standard features, and performing a weighted average on the plurality of initial confidences according to weights of the multiple dimensions to obtain the matching confidence of the task request; or,
Performing feature fusion on the plurality of standard features to obtain a comprehensive feature, and performing identity confidence calculation for the target account based on the comprehensive feature to obtain the matching confidence of the task request.
6. The method according to any one of claims 1-5, further comprising:
acquiring historical interaction information from the target account, wherein the historical interaction information comprises historical voice input information and historical text input information;
information extraction is carried out on the historical interaction information to obtain training information with multiple dimensions;
And constructing a feature processing model based on the training information, wherein the feature processing model is used for calculating the identity confidence coefficient aiming at the target account based on the information to be verified in multiple dimensions to obtain the matching confidence coefficient of the task request.
7. The method of claim 6, wherein the method further comprises:
providing preset guiding content, so that the historical voice input information comprises voice input information corresponding to the preset guiding content, and the historical text input information comprises text input information corresponding to the preset guiding content.
8. The method of claim 6, wherein the method further comprises:
Responding to an ith satisfying of an update condition, and performing an ith update on the feature processing model according to ith interaction information from the target account, or the ith interaction information and ith feedback information, wherein i is a positive integer, and the ith satisfying of the update condition comprises at least one of: a duration without update reaching a preset duration, a misjudgment of the verification result, and a quantity of interaction information reaching a preset quantity.
9. The method of claim 8, wherein the updating the feature processing model for the ith time in response to the ith meeting of an update condition according to the ith interaction information from the target account, or the ith interaction information and the ith feedback information, comprises:
responding to the ith meeting of the updating condition, and calculating an ith target parameter of the feature processing model according to the ith interaction information from the target account or the ith interaction information and the ith feedback information;
performing a weighted average on an (i-1)th parameter of the feature processing model and the ith target parameter to obtain an ith parameter of the feature processing model, wherein the 0th parameter is an original parameter of the feature processing model;
and performing the ith update on the feature processing model according to the ith parameter.
10. The method according to any one of claims 1-5, further comprising:
determining a confidence threshold corresponding to the task request according to the request category of the task request;
And if the matching confidence coefficient and the confidence coefficient threshold value meet the verification passing condition, determining that the verification result of the task request is verification passing.
11. The method according to claim 10, wherein the method further comprises:
If the matching confidence coefficient and the confidence coefficient threshold meet additional verification conditions, determining that the verification result of the task request is that additional verification is needed, and providing additional verification information input indication;
And if the matching confidence coefficient and the confidence coefficient threshold value meet the verification failure condition, determining that the verification result of the task request is verification failure, and refusing to execute the task to be executed.
12. The method according to any one of claims 1-5, further comprising:
in response to the task request, exposing dimension information for the plurality of dimensions, and/or,
The confidence of the match is shown, and/or,
The verification result is displayed, and/or,
And displaying the execution result of the task to be executed.
13. An identity authentication device, the device comprising:
an information extraction unit, configured to respond to a task request from a target account and extract information from the task request to obtain information to be verified in multiple dimensions, wherein the multiple dimensions at least comprise a voice voiceprint dimension and a language behavior dimension, or at least comprise a text input behavior dimension and a language behavior dimension;
a confidence calculation unit, configured to perform identity confidence calculation for the target account based on the information to be verified in the multiple dimensions, to obtain a matching confidence of the task request; and
a task execution unit, configured to execute a task to be executed corresponding to the task request if it is determined, according to the matching confidence, that a verification result of the task request is that verification passes.
14. A computer device, the computer device comprising a processor and a memory:
The memory is used for storing a computer program and transmitting the computer program to the processor;
The processor is configured to perform the identity authentication method of any one of claims 1-12 according to instructions in the computer program.
15. A computer readable storage medium, characterized in that the computer readable storage medium is for storing a computer program for executing the identity authentication method according to any one of claims 1-12.
16. A computer program product comprising a computer program which, when run on a computer device, causes the computer device to perform the identity authentication method of any one of claims 1-12.
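Two of the claimed mechanisms lend themselves to a short illustrative sketch: the smoothed parameter update of claim 9 (the ith parameter as a weighted average of the (i-1)th parameter and the ith target parameter) and the three-way threshold decision of claims 10 and 11. The smoothing factor, threshold, and margin below are hypothetical values, not specified by the application.

```python
# Illustrative sketches of claims 9-11; alpha, threshold, and margin are
# assumed values chosen for demonstration only.

def update_parameters(prev_params: list, target_params: list, alpha: float = 0.5) -> list:
    """Claim 9: ith parameter = weighted average of the (i-1)th parameter
    and the ith target parameter (alpha weights the older parameter)."""
    return [alpha * p + (1.0 - alpha) * t for p, t in zip(prev_params, target_params)]

def verification_result(confidence: float, threshold: float, margin: float = 0.1) -> str:
    """Claims 10-11: map the matching confidence to pass / extra verification / fail."""
    if confidence >= threshold:
        return "pass"                      # execute the task to be executed
    if confidence >= threshold - margin:
        return "additional_verification"   # provide an additional verification input indication
    return "fail"                          # refuse to execute the task
```

The weighted-average update keeps the model from over-fitting the most recent interactions; lowering alpha would weight new interaction data more heavily.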
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510886324.8A CN120856376A (en) | 2025-06-26 | 2025-06-26 | Identity authentication method and related device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510886324.8A CN120856376A (en) | 2025-06-26 | 2025-06-26 | Identity authentication method and related device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120856376A true CN120856376A (en) | 2025-10-28 |
Family
ID=97409173
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510886324.8A Pending CN120856376A (en) | 2025-06-26 | 2025-06-26 | Identity authentication method and related device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120856376A (en) |
- 2025-06-26 CN CN202510886324.8A patent/CN120856376A/en active Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7689418B2 (en) | Method and system for non-intrusive speaker verification using behavior models | |
| US6490560B1 (en) | Method and system for non-intrusive speaker verification using behavior models | |
| KR101995547B1 (en) | Neural Networks for Speaker Verification | |
| US7039951B1 (en) | System and method for confidence based incremental access authentication | |
| US9799338B2 (en) | Voice print identification portal | |
| KR20160011709A (en) | Method, apparatus and system for payment validation | |
| CN109462482A (en) | Method for recognizing sound-groove, device, electronic equipment and computer readable storage medium | |
| CN113707157B (en) | Voiceprint recognition-based identity verification method and device, electronic equipment and medium | |
| KR20190127372A (en) | Electronic device and method for executing function of electronic device | |
| US12412177B2 (en) | Methods and systems for training a machine learning model and authenticating a user with the model | |
| EP4184355A1 (en) | Methods and systems for training a machine learning model and authenticating a user with the model | |
| US12512094B2 (en) | System and method for consent detection and validation | |
| CN112417412A (en) | Bank account balance inquiry method, device and system | |
| EP1470549B1 (en) | Method and system for non-intrusive speaker verification using behavior models | |
| CN120856376A (en) | Identity authentication method and related device | |
| KR100383391B1 (en) | Voice Recogizing System and the Method thereos | |
| CN119296547B (en) | Voiceprint generation method, verification method, identification device and storage medium | |
| Ceaparu et al. | Multifactor voice-based authentication system | |
| US20260023880A1 (en) | Limiting activity based on a profile | |
| US20260025460A1 (en) | Contact Center Bot Architecture | |
| Duraibi et al. | Suitability of Voice Recognition Within the IoT Environment | |
| CN120612945A (en) | Cross-border financial transaction control method, device, computer equipment and storage medium | |
| CN121120063A (en) | Digital payment identity security verification method, system, equipment and medium based on cloud platform | |
| CN120580988A (en) | Speech model training method, language recognition method, device, equipment and medium | |
| CN118474256A (en) | Identity authentication method, system, equipment, medium and product based on voiceprint recognition |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||