
CN114117037B - Intent recognition method, device, equipment and storage medium - Google Patents

Intent recognition method, device, equipment and storage medium

Info

Publication number
CN114117037B
Authority
CN
China
Prior art keywords
intention
label
confidence
candidate
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111303756.XA
Other languages
Chinese (zh)
Other versions
CN114117037A (en)
Inventor
张云云
夏海兵
佘丽丽
毛宇
王福海
纳颖泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaolian Consumer Finance Co ltd
Original Assignee
Zhaolian Consumer Finance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaolian Consumer Finance Co ltd
Priority to CN202111303756.XA
Publication of CN114117037A
Application granted
Publication of CN114117037B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract


The present application relates to an intent recognition method, device, equipment and storage medium. The method comprises: obtaining a text to be tested, preprocessing the text to be tested, and obtaining a target numerical sequence corresponding to the text to be tested; inputting the target numerical sequence into a multi-intent recognition model evolved from a single intent recognition model to obtain a candidate intent label set, wherein the candidate intent label set includes at least one candidate intent label and the confidence corresponding to the candidate intent label; if there is at least one candidate intent label with a confidence greater than a first threshold, the candidate intent label set is subjected to an adaptive threshold calculation to obtain an adaptive threshold; the intent label corresponding to the confidence not less than the adaptive threshold in the candidate intent label set is selected as the target intent label. It is possible to adaptively output different numbers of intent labels for different texts, thereby achieving accurate recognition of text intent.

Description

Intention recognition method, device, equipment and storage medium
Technical Field
The present application relates to the field of natural language processing technologies, and in particular, to an intent recognition method, apparatus, device, and storage medium.
Background
Natural language processing (Natural Language Processing, NLP) is an important direction in the fields of computer science and artificial intelligence; it studies theories and methods that enable effective communication between humans and computers in natural language. With the rapid development and wide application of artificial intelligence technology, more and more industries involve man-machine dialogue systems, so the language requests of users must be effectively recognized for intention in order to provide them with accurate, corresponding services.
User intention processing is a discipline that integrates linguistics, computer science and mathematics. It is mainly applied in search and recommendation, computational advertising, man-machine dialogue, machine translation, public opinion monitoring, automatic summarization, viewpoint extraction, text classification, question answering, text semantic comparison, speech recognition, Chinese OCR (Optical Character Recognition) and the like. However, most existing intention recognition models can only recognize a single intention, and their accuracy on texts containing multiple intentions is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an intention recognition method, apparatus, device, and storage medium capable of accurately recognizing a text intention.
In a first aspect, the present application provides an intent recognition method, the method comprising:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting a target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value;
and selecting the label corresponding to the confidence coefficient which is not smaller than the self-adaptive threshold value in the candidate intention label set as the target intention label.
In one embodiment, the way in which the single intent recognition model is evolved into the multi-intent recognition model includes:
preprocessing a multi-intention sample set to obtain a multi-intention sample value sequence;
Inputting the multi-intention sample numerical sequence into a single intention recognition model to obtain an initial intention label;
calculating cross entropy loss between the initial intent label and actual intent labels of a multi-intent sample set;
And reversely adjusting model parameters of the single-intention model according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, if the confidence level of the at least one candidate intention label is greater than the first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold, including:
If the confidence coefficient of at least one candidate intention label is larger than a first threshold value, sequencing the confidence coefficient in the candidate intention label set from large to small to obtain a confidence coefficient sequence;
Sequentially selecting each confidence coefficient in the confidence coefficient sequence as a target element from the first confidence coefficient in the confidence coefficient sequence, calculating a first mean and a first variance of the target element and all confidence coefficients sequenced in front of the target element, and calculating a second mean and a second variance of all confidence coefficients sequenced behind the target element;
And determining a difference value through the first mean value, the second mean value, the first variance and the second variance, and taking a target element corresponding to the maximum difference value as an adaptive threshold.
In one embodiment, the determining the difference value by the first mean, the second mean, the first variance, and the second variance includes:
calculating a difference value between the first average value and the second average value to obtain a first difference value;
calculating a difference value between the first variance and the second variance to obtain a second difference value;
And obtaining a difference value according to the first difference value and the second difference value, wherein the difference value is proportional to the square of the first difference value, and the difference value is inversely proportional to the second difference value.
In one embodiment, after the selecting, as the target intention label, a label corresponding to a confidence level greater than the adaptive threshold in the candidate intention label set, the method further includes:
And removing the conflict label in the target intention label to obtain a final intention label identification result.
In one embodiment, the removing the conflict tag in the target intention tag to obtain a final intention tag identification result includes:
and removing the conflict label with smaller confidence in the conflict label to obtain a final intention label identification result.
In a second aspect, the present application also provides an intention recognition apparatus, the apparatus comprising:
The preprocessing module is used for acquiring a text to be detected, preprocessing the text to be detected and obtaining a target numerical value sequence corresponding to the text to be detected;
The model processing module is used for inputting the target numerical value sequence into a multi-intention recognition model which is changed from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
the self-adaptive threshold module is used for carrying out self-adaptive threshold calculation on the candidate intention label set to obtain a self-adaptive threshold if the confidence coefficient of at least one candidate intention label is larger than a first threshold;
The intention recognition module is used for selecting the label corresponding to the confidence coefficient which is not smaller than the self-adaptive threshold value in the candidate intention label set as the target intention label.
In one embodiment, the intent recognition device further includes a model training module for developing a single intent recognition model into a multi-intent recognition model, comprising:
the preprocessing unit is used for preprocessing the multi-intention sample set to obtain a multi-intention sample value sequence;
the single intention model unit is used for inputting the multi-intention sample numerical sequence into a single intention recognition model to obtain an initial intention label;
A cross entropy calculation unit for calculating a cross entropy loss between the initial intention label and an actual intention label of the multi-intention sample set;
and the parameter training unit is used for reversely adjusting the model parameters of the single-intention model according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, the adaptive threshold module comprises:
The confidence coefficient sequence unit is used for sequencing the confidence coefficient in the candidate intention label set from large to small if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, so as to obtain a confidence coefficient sequence;
The target element unit is used for sequentially selecting each confidence coefficient in the confidence coefficient sequence as a target element from the first confidence coefficient in the confidence coefficient sequence, calculating a first mean and a first variance of the target element and all confidence coefficients sequenced in front of the target element, and calculating a second mean and a second variance of all confidence coefficients sequenced behind the target element;
the difference calculation unit is used for determining a difference value through the first mean value, the second mean value, the first variance and the second variance, and taking a target element corresponding to the maximum difference value as an adaptive threshold.
In one embodiment, the difference calculating unit is further configured to calculate a difference between the first average value and the second average value to obtain a first difference value, calculate a difference between the first variance and the second variance to obtain a second difference value, and obtain a difference value according to the first difference value and the second difference value, where the difference value is proportional to a square of the first difference value, and the difference value is inversely proportional to the second difference value.
In one embodiment, the intention recognition device further includes a conflict processing module, configured to remove, after the selecting, as the target intention label, a label corresponding to a confidence coefficient greater than the adaptive threshold in the candidate intention label set, the conflict label in the target intention label, and obtain a final intention label recognition result.
In one embodiment, the conflict processing module is further configured to remove a conflict tag with a smaller confidence coefficient from the conflict tags, so as to obtain a final intended tag identification result.
In a third aspect, the present application further provides an electronic device. The electronic device comprises a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the following steps:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting a target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value;
and selecting the label corresponding to the confidence coefficient larger than the self-adaptive threshold value in the candidate intention label set as a target intention label.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting a target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value;
and selecting the label corresponding to the confidence coefficient larger than the self-adaptive threshold value in the candidate intention label set as a target intention label.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting a target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value;
and selecting the label corresponding to the confidence coefficient larger than the self-adaptive threshold value in the candidate intention label set as a target intention label.
The intent recognition method, device, equipment and storage medium comprise the steps of obtaining a text to be detected, preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected, inputting the target numerical sequence into a multi-intent recognition model changed from a single intent recognition model to obtain a candidate intent label set, wherein the candidate intent label set comprises at least one candidate intent label and confidence corresponding to the candidate intent label, if the confidence of the at least one candidate intent label is larger than a first threshold, carrying out self-adaptive threshold calculation on the candidate intent label set to obtain a self-adaptive threshold, and selecting the intent label corresponding to the confidence of the candidate intent label set larger than the self-adaptive threshold as a target intent label. The single intention recognition model is evolved into the multi-intention recognition model, and the adaptive threshold is obtained by carrying out adaptive threshold calculation, so that different numbers of intention labels can be output to different texts in an adaptive manner, and the accurate recognition of the text intention is realized.
Drawings
FIG. 1 is a diagram of an application environment for a method of intent identification in one embodiment;
FIG. 2 is a flow diagram of a method of intent recognition in one embodiment;
FIG. 3 is a flow diagram of an evolving multi-intent recognition model in one embodiment;
FIG. 4 is a flowchart of step S208 in the embodiment shown in FIG. 2;
FIG. 5 is a flowchart of step S406 in the embodiment shown in FIG. 4;
FIG. 6 is a flow diagram of a method of intent recognition in one embodiment;
FIG. 7 is a flow diagram of a method of intent recognition in one embodiment;
FIG. 8 is a block diagram of an embodiment of an apparatus for identifying intent;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The intention recognition method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The method comprises the steps of receiving a text to be detected sent by a terminal 102, preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected, inputting the target numerical sequence into a multi-intention recognition model changed from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence corresponding to the candidate intention label, if the confidence of the at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value, and selecting labels corresponding to the confidence of the candidate intention label set larger than the self-adaptive threshold value as target intention labels. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, there is provided an intention recognition method, which is described by taking an example that the method is applied to the server in fig. 1, and includes the following steps:
s202, acquiring a text to be tested, and preprocessing the text to be tested to obtain a target numerical sequence corresponding to the text to be tested.
The text to be tested can be text entered directly, or voice input that is converted into text by speech-to-text technology. After the text to be tested is obtained, it is preprocessed: for example, special symbols, sensitive information, punctuation marks and stop words are removed, the text is segmented into words, symbol marks are added, and so on. Word segmentation splits the text into individual words, for example with the jieba library, while the symbol marks indicate the start and end positions of the text to be tested. The preprocessed text is then numerically encoded according to the preset length of the numerical sequence, for example using any of the keras, tensorflow, pytorch, mxnet, caffe or PaddlePaddle libraries, or using a self-designed encoding procedure, to obtain the target numerical sequence corresponding to the text to be tested. Caffe is a deep learning framework developed by Berkeley Vision and Learning Center (BVLC) community contributors.
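As an illustration of this preprocessing step, here is a minimal Python sketch assuming jieba for word segmentation and a toy vocabulary for the numerical encoding; the vocabulary, the special start/end markers and the sequence length are illustrative assumptions rather than the patent's concrete choices.

```python
import re
import jieba  # assumed available for Chinese word segmentation

# Hypothetical vocabulary; in practice it would be built from the training corpus.
VOCAB = {"<pad>": 0, "<start>": 1, "<end>": 2, "<unk>": 3}
MAX_LEN = 32  # preset length of the numerical sequence (illustrative)

def preprocess(text: str) -> list[int]:
    # Remove special symbols and punctuation (a simplified stand-in for removing
    # special symbols, sensitive information, punctuation and stop words).
    text = re.sub(r"[^\w]", "", text)
    # Word segmentation, e.g. with the jieba library.
    tokens = list(jieba.cut(text))
    # Add symbol marks for the start and end positions of the text to be tested.
    tokens = ["<start>"] + tokens + ["<end>"]
    # Numerical encoding against the vocabulary, truncated/padded to MAX_LEN.
    ids = [VOCAB.get(tok, VOCAB["<unk>"]) for tok in tokens][:MAX_LEN]
    ids += [VOCAB["<pad>"]] * (MAX_LEN - len(ids))
    return ids
```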
S204, inputting the target numerical value sequence into a multi-intention recognition model which is changed from the single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence corresponding to the candidate intention label.
The single-intention recognition model can only recognize a single intention label for text with a single intention, whereas the multi-intention recognition model can recognize multiple intention labels for text with multiple intentions. The single-intention recognition model is trained on a text data set containing multi-intention samples to obtain a multi-intention recognition model capable of recognizing multiple intention labels. The target numerical sequence is input into the multi-intention recognition model evolved from the single-intention recognition model to obtain a candidate intention label set, which comprises at least one candidate intention label and the confidence corresponding to each candidate intention label: if the text to be detected contains only one intention label, the candidate set comprises one candidate intention label and its confidence; if the text to be detected contains several intention labels, the candidate set comprises several candidate intention labels and their confidences. A confidence in the candidate intention label set can be understood as a confidence prediction for the corresponding actual intention label; the confidences are arranged in the order of the intention labels and correspond to them one by one. The single-intention recognition model may be a text classification model such as a convolutional neural network text classification model (TextCNN), a fast text classification model (FastText), or a deep pyramid convolutional neural network model (DPCNN).
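For concreteness, the candidate intention label set produced by this step can be pictured as confidences aligned one-to-one with an ordered list of intention labels; the label names and values below are purely illustrative.

```python
# Illustrative only: label names and confidence values are made up, not outputs
# of the patented model.
candidate_labels = ["query_bill", "apply_installment", "confirm_transacting",
                    "deny_transacting", "complain", "other"]
confidences = [0.98, 0.95, 0.12, 0.07, 0.03, 0.02]  # one confidence per label position

candidate_set = list(zip(candidate_labels, confidences))
# [('query_bill', 0.98), ('apply_installment', 0.95), ('confirm_transacting', 0.12), ...]
```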
S206, if the confidence coefficient of at least one candidate intention label is larger than the first threshold, carrying out self-adaptive threshold calculation on the candidate intention label set to obtain a self-adaptive threshold.
And if the confidence coefficient of at least one candidate intention label in the candidate intention label set is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value corresponding to the candidate intention label. The first threshold is a super parameter, and can be set according to specific situations, for example, the first threshold is less than or equal to 0.4, and the adaptive threshold is a threshold that is more matched with the whole data features of the confidence degrees corresponding to the candidate intention labels in the candidate intention label set, namely, in different candidate intention label sets, the confidence degrees corresponding to the candidate intention labels are different, and the corresponding adaptive thresholds are also different.
In an alternative embodiment, the confidence level of each candidate intention label in the candidate intention label set is compared with a first threshold value, and if at least one candidate intention label exists with the confidence level greater than the first threshold value, the candidate intention label set is subjected to adaptive threshold value calculation to obtain an adaptive threshold value.
In another optional embodiment, the confidence coefficient corresponding to the largest candidate intention label in the candidate intention label set is obtained first, the largest confidence coefficient is compared with a first threshold value, and if the confidence coefficient is larger than the first threshold value, the candidate intention label set is subjected to self-adaptive threshold value calculation to obtain the self-adaptive threshold value.
S208, selecting the intention label corresponding to the confidence coefficient which is not smaller than the self-adaptive threshold value in the candidate intention label set as the target intention label.
After the self-adaptive threshold is determined, selecting the intention label corresponding to the confidence coefficient which is not smaller than the self-adaptive threshold in the candidate intention label set as the target intention label. That is, the intention labels corresponding to the confidence level greater than or equal to the adaptive threshold value may be sequentially selected as the target intention labels for the present intention recognition according to the order of the candidate intention labels in the candidate intention label set.
According to the intention recognition method, a target numerical value sequence corresponding to a text to be detected is obtained by obtaining the text to be detected and preprocessing the text to be detected, the target numerical value sequence is input into a multi-intention recognition model changed from a single-intention recognition model to obtain a candidate intention label set, the candidate intention label set comprises at least one candidate intention label and confidence corresponding to the candidate intention label, if the confidence of the at least one candidate intention label is larger than a first threshold, the candidate intention label set is subjected to self-adaptive threshold calculation to obtain a self-adaptive threshold, and a label corresponding to the confidence of the candidate intention label set larger than the self-adaptive threshold is selected to serve as a target intention label. The single intention recognition model is evolved into the multi-intention recognition model, and the adaptive threshold is obtained by carrying out adaptive threshold calculation, so that different numbers of intention labels can be output to different texts in an adaptive manner, and the accurate recognition of the text intention is realized.
In one embodiment, as shown in FIG. 3, the manner in which the single intent recognition model is evolved into a multi-intent recognition model, includes the steps of:
s302, preprocessing the multi-intention sample set to obtain a multi-intention sample value sequence.
S304, inputting the multi-intention sample numerical sequence into a single intention recognition model to obtain an initial intention label;
In one implementation, the actual intention labels of the multi-intention sample set are encoded as follows. Assume the sample set involves n intention labels arranged in a fixed order. For one multi-intention sample, the n labels are traversed in that fixed order; if the sample contains the label at the corresponding position, that position is encoded as 1, otherwise as 0. The intention labels of the sample are thus encoded as a sequence of 0s and 1s; if the sample contains m labels, the sequence contains m ones and n-m zeros. The other multi-intention samples in the multi-intention sample set are encoded in the same way (a minimal sketch of this encoding follows step S308 below).
S306, calculating cross entropy loss between the initial intention label and the actual intention label of the multi-intention sample set.
S308, reversely adjusting model parameters of the single-intention model according to the cross entropy loss until a preset training termination condition is met, and obtaining the multi-intention recognition model.
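A minimal sketch of the multi-hot label encoding described above, assuming the n intention labels are given as a fixed, ordered list; the label names are hypothetical.

```python
# Fixed ordering of the n intention labels (names are hypothetical).
ALL_LABELS = ["query_bill", "apply_installment", "confirm_transacting",
              "deny_transacting", "complain", "other"]

def encode_labels(sample_labels: set[str]) -> list[int]:
    # Traverse the n labels in their fixed order; 1 if the sample contains
    # the label at that position, otherwise 0.
    return [1 if label in sample_labels else 0 for label in ALL_LABELS]

# A sample with m = 2 labels yields m ones and n - m zeros:
print(encode_labels({"query_bill", "apply_installment"}))  # [1, 1, 0, 0, 0, 0]
```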
In this embodiment, for a specific implementation process of preprocessing the multi-purpose sample set to obtain the multi-purpose sample value sequence, reference may be made to the description of step S202 in the above embodiment, which is not repeated herein. After the multi-intention sample value sequence is obtained, inputting the multi-intention sample value sequence into a single-intention recognition model to obtain an initial intention label, calculating cross entropy loss between the initial intention label and an actual intention label corresponding to the multi-intention sample set, reversely adjusting model parameters of the single-intention recognition model according to the cross entropy loss until a preset training termination condition is met, obtaining target model parameters, and using the target model parameters to replace the corresponding model parameters in the original single-intention recognition model to obtain the multi-intention recognition model. The preset termination condition may be preset training times, that is, when the number of times of model training reaches the preset training times, model training is terminated to obtain the target model parameter, or may be preset cross entropy loss, when the cross entropy loss reaches the preset cross entropy loss, the model parameter obtained by the last training is the target model parameter.
In an optional implementation manner, the single-intention recognition model is a convolutional neural network text classification TextCNN model, wherein an activation function in the TextCNN model is a Sigmoid function, a loss function is a two-class cross entropy loss function (Binary Cross Entropy, BCE), a multi-intention sample numerical sequence is input into the TextCNN model to obtain an initial intention label, cross entropy loss between the initial intention label and an actual intention label corresponding to the multi-intention sample set is calculated by using the two-class cross entropy loss function, and model parameters of the single-intention recognition model are reversely adjusted according to the cross entropy loss until the cross entropy loss is zero, namely, an output result of the TextCNN model is identical to an encoding result of the actual intention label, so that target model parameters are obtained, and at the moment, the obtained model with the target model parameters is the multi-intention recognition model.
The multi-intention recognition model obtained after TextCNN model training can output a numerical sequence equal to the actual intention label coding sequence, wherein each numerical value is the confidence coefficient of the intention label at the corresponding position, and the confidence coefficient is closer to 1, the probability that the text contains the corresponding intention label is higher.
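As a hedged illustration of this evolution, the sketch below replaces the single-label head of a simplified TextCNN-style network with per-label sigmoid outputs trained against the multi-hot targets using binary cross entropy; the layer sizes, vocabulary size, learning rate and label count are illustrative assumptions rather than the patent's concrete configuration.

```python
import torch
import torch.nn as nn

class SimpleTextCNN(nn.Module):
    """Simplified TextCNN-style classifier; dimensions are illustrative."""
    def __init__(self, vocab_size: int, num_labels: int, emb_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(64, num_labels)

    def forward(self, x):                      # x: (batch, seq_len) of token ids
        h = self.embedding(x).transpose(1, 2)  # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(h))
        h = self.pool(h).squeeze(-1)           # (batch, 64)
        return self.fc(h)                      # one raw logit per intention label

model = SimpleTextCNN(vocab_size=10000, num_labels=6)
# Per-label sigmoid + binary cross entropy instead of a single-label softmax loss:
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(token_ids: torch.Tensor, multi_hot_targets: torch.Tensor) -> float:
    optimizer.zero_grad()
    logits = model(token_ids)
    loss = criterion(logits, multi_hot_targets.float())
    loss.backward()      # back-propagate the cross entropy loss
    optimizer.step()     # reversely adjust the model parameters
    return loss.item()

# At inference, per-label confidences are obtained with a sigmoid over the logits:
# confidences = torch.sigmoid(model(token_ids))
```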
In one embodiment, as shown in fig. 4, if there is at least one candidate intention label with a confidence level greater than the first threshold, the step S206 of performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold includes:
And S402, if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, sequencing the confidence coefficient in the candidate intention label set from large to small to obtain a confidence coefficient sequence.
S404, starting from the first confidence in the confidence sequence, sequentially selecting each confidence in the confidence sequence as a target element, calculating a first mean and a first variance of the target element and all the confidences ordered in front of the target element, and calculating a second mean and a second variance of all the confidences ordered behind the target element.
In one specific example, assume the confidence sequence is {0.98,0.96,0.95,0.92}. First, the first confidence 0.98 in the sequence is selected as the target element; the first mean and first variance of the target element and all confidences ranked before it, i.e. {0.98}, are 0.98 and 0 respectively, and the second mean and second variance of all confidences ranked behind it, i.e. {0.96,0.95,0.92}, are 0.94 and 0.00030 respectively. Next, the second confidence 0.96 is selected as the target element; the first mean and first variance of {0.98,0.96} are 0.97 and 0.00010, and the second mean and second variance of {0.95,0.92} are 0.94 and 0.00025. Then, the third confidence 0.95 is selected as the target element; the first mean and first variance of {0.98,0.96,0.95} are 0.96 and 0.00017, and the second mean and second variance of {0.92} are 0.92 and 0. Finally, the fourth confidence 0.92 is selected as the target element; the first mean and first variance of all four confidences {0.98,0.96,0.95,0.92} are 0.95 and 0.00048, and since no confidences are ranked behind the target element, the second mean and second variance are taken as 0. The details are shown in Table 1 below:
TABLE 1
Target element | First mean | First variance | Second mean | Second variance
0.98           | 0.98       | 0              | 0.94        | 0.00030
0.96           | 0.97       | 0.00010        | 0.94        | 0.00025
0.95           | 0.96       | 0.00017        | 0.92        | 0
0.92           | 0.95       | 0.00048        | 0           | 0
S406, determining a difference value through the first mean value, the second mean value, the first variance and the second variance, and taking a target element corresponding to the maximum difference value as an adaptive threshold.
In the embodiment, if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, the confidence coefficient in the candidate intention label set is sequenced from large to small to obtain a confidence coefficient sequence, each confidence coefficient in the confidence coefficient sequence is sequentially selected as a target element from the first confidence coefficient in the confidence coefficient sequence, the average value and the variance of the target element and all confidence coefficients sequenced in front of the target element are calculated and respectively used as a first average value and a first variance, the average value and the variance of all confidence coefficients sequenced behind the target element are calculated and respectively used as a second average value and a second variance, a difference value is determined through the first average value, the second average value, the first variance and the second variance, and the target element corresponding to the largest difference value is used as an adaptive threshold value. Wherein the mean value may be an arithmetic mean value or a weighted mean value.
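A minimal Python sketch of this segmentation of the sorted confidence sequence (steps S402–S404); it mirrors the worked example and Table 1 above, although the exact rounding used there may differ.

```python
def segment_stats(confidences: list[float]) -> list[dict]:
    """For each target element of the descending confidence sequence, compute the
    mean/variance of it plus everything before it (first mean/variance) and of
    everything after it (second mean/variance)."""
    def mean_var(values: list[float]) -> tuple[float, float]:
        if not values:
            return 0.0, 0.0          # convention for an empty back segment
        m = sum(values) / len(values)
        v = sum((x - m) ** 2 for x in values) / len(values)
        return m, v

    conf = sorted(confidences, reverse=True)   # order from large to small
    rows = []
    for i, target in enumerate(conf):
        mu1, var1 = mean_var(conf[: i + 1])    # target element and all before it
        mu2, var2 = mean_var(conf[i + 1 :])    # all confidences after it
        rows.append({"target": target, "mu1": mu1, "var1": var1,
                     "mu2": mu2, "var2": var2})
    return rows

# e.g. segment_stats([0.98, 0.96, 0.95, 0.92]) yields rows close to Table 1.
```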
In one embodiment, as shown in fig. 5, the step S406 of determining the difference value by the first mean, the second mean, the first variance, and the second variance includes:
s502, calculating a difference value between the first average value and the second average value to obtain a first difference value;
s504, calculating a difference value between the first variance and the second variance to obtain a second difference value;
S506, obtaining a difference value according to the first difference value and the second difference value, wherein the difference value is proportional to the square of the first difference value, and the difference value is inversely proportional to the second difference value.
In an alternative embodiment, consistent with the proportionality described above, the difference value diff is computed as diff = (μ1 − μ2)² / (var1 − var2), where μ1 is the first mean, μ2 is the second mean, var1 is the first variance, and var2 is the second variance.
In another alternative embodiment, the difference value diff additionally incorporates a constant k greater than zero, for example diff = (μ1 − μ2)² / (var1 − var2 + k), so that the calculation remains well-defined when the two variances are equal. That is, provided the first mean, the second mean, the first variance and the second variance are retained, the formula may be suitably adapted so that the result is more accurate in a specific application scenario.
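The following sketch combines the segment statistics above with the reconstructed diff formula to pick the adaptive threshold; the constant k in the denominator and the handling of the split whose back segment is empty are illustrative assumptions, since the text leaves both open.

```python
def adaptive_threshold(confidences: list[float], k: float = 1e-6) -> float:
    """Return the target element whose split maximises the difference value.

    The diff expression is reconstructed from the stated proportionality:
    proportional to the square of the first difference (mu1 - mu2) and inversely
    proportional to the second difference (var1 - var2); k is only an illustrative
    guard so the division stays well-defined."""
    rows = segment_stats(confidences)          # segment_stats: see earlier sketch
    if len(rows) == 1:
        return rows[0]["target"]
    best_target, best_diff = rows[0]["target"], float("-inf")
    for row in rows[:-1]:   # skip the final split, whose back segment is empty
        diff = (row["mu1"] - row["mu2"]) ** 2 / (row["var1"] - row["var2"] + k)
        if diff > best_diff:
            best_target, best_diff = row["target"], diff
    return best_target
```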
In one embodiment, as shown in fig. 6, there is provided an intention recognition method including the steps of:
S602, acquiring a text to be tested, and preprocessing the text to be tested to obtain a target numerical sequence corresponding to the text to be tested.
S604, inputting the target numerical value sequence into a multi-intention recognition model which is changed from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence corresponding to the candidate intention label.
And S606, if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value.
S608, selecting the intention label corresponding to the confidence coefficient which is not smaller than the self-adaptive threshold value in the candidate intention label set as the target intention label.
S610, removing conflict tags in the target intention tags to obtain a final intention tag identification result.
In the embodiment, a text to be detected is firstly obtained, preprocessing is carried out on the text to be detected to obtain a target numerical sequence corresponding to the text to be detected, the target numerical sequence is input into a multi-intention recognition model changed from a single-intention recognition model to obtain a candidate intention label set, the candidate intention label set comprises at least one candidate intention label and confidence coefficient corresponding to the candidate intention label, if the confidence coefficient of the at least one candidate intention label is larger than a first threshold value, the candidate intention label set is subjected to self-adaptive threshold value calculation to obtain a self-adaptive threshold value, a label corresponding to the confidence coefficient of the candidate intention label set larger than the self-adaptive threshold value is selected to serve as a target intention label, and conflicting labels which do not meet preset conditions in the target intention label are removed to obtain a final intention label recognition result. Conflicting tags refer to tags intended to express opposition to each other, such as "confirm transacting" and "deny transacting", which cannot coexist in the same text. Steps S602 to S608 correspond to steps S202 to S208, and are not described herein.
In one embodiment, the step S610 of removing the conflict label in the target intention label to obtain the final intention label recognition result includes removing the conflict label with smaller confidence in the conflict label to obtain the final intention label recognition result. The final intention label identification result comprises an intention label and corresponding intention content.
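A small sketch of this post-processing, assuming conflicting intentions are declared as pairs; the pair list and label names are hypothetical.

```python
# Hypothetical pairs of mutually exclusive intention labels.
CONFLICT_PAIRS = [("confirm_transacting", "deny_transacting")]

def remove_conflicts(result: dict[str, float]) -> dict[str, float]:
    """result maps each target intention label to its confidence; for every
    conflicting pair that is present, drop the label with the smaller confidence."""
    kept = dict(result)
    for a, b in CONFLICT_PAIRS:
        if a in kept and b in kept:
            kept.pop(a if kept[a] < kept[b] else b)
    return kept

# remove_conflicts({"confirm_transacting": 0.97, "deny_transacting": 0.41})
# -> {"confirm_transacting": 0.97}
```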
In one specific example, as shown in fig. 7, the intention recognition method includes the steps of:
s702, obtaining a text to be tested. And the server receives the text to be tested sent by the terminal.
S704, preprocessing the text to be detected to obtain a target numerical sequence. The server preprocesses the text to be detected, and numerical codes the preprocessed text to be detected to obtain a target numerical sequence corresponding to the text to be detected.
S706, inputting the target numerical sequence into the multi-intention recognition model trained from the TextCNN model to obtain candidate intention labels and the corresponding confidences. The TextCNN model is trained with the multi-intention sample set to obtain a corresponding multi-intention recognition model; the target numerical sequence obtained in step S704 is input into this multi-intention recognition model to obtain a candidate intention label set, which comprises at least one candidate intention label and the confidence corresponding to the candidate intention label.
S708, judging whether the maximum confidence is larger than a first threshold K. And selecting the maximum confidence coefficient in the candidate intention label set, comparing the maximum confidence coefficient with a first threshold K, and judging whether the maximum confidence coefficient is larger than the first threshold K, wherein the first threshold K is 0.3.
S710, outputting the result as the "other" label. If the maximum confidence is smaller than or equal to the first threshold K, all confidences in the candidate intention set are small; the text to be tested is considered irrelevant text, i.e. the model's judgment on the text is not valid, and the result is output as the "other" label.
S712, sequencing the confidence from big to small, calculating the segmentation difference value of the confidence, and determining the self-adaptive threshold t. And sequentially selecting each confidence coefficient in the confidence coefficient sequence as a target element from the first confidence coefficient in the confidence coefficient sequence, calculating a first mean value and a first variance of the target element and all confidence coefficients sequenced in front of the target element, calculating a second mean value and a second variance of all confidence coefficients sequenced behind the target element, determining a difference value through the first mean value, the second mean value, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as an adaptive threshold t.
S714, outputting the intention label corresponding to the confidence coefficient larger than the self-adaptive threshold t. Selecting an intention label corresponding to the confidence coefficient larger than the self-adaptive threshold t in the candidate intention set as a target intention label list, wherein the target intention label list comprises intention content, intention labels and corresponding confidence coefficients.
S716, performing post-processing. And processing conflict labels in the target intention label list, and selecting and removing the intention labels with smaller confidence in the conflict labels.
S718, outputting an intention recognition result. The post-processed target intention label list is the final intention recognition result.
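Read together, steps S702–S718 amount to the pipeline sketched below; model, ALL_LABELS and the helper functions come from the earlier sketches and are assumptions rather than the patent's concrete implementation.

```python
import torch

def recognize_intents(text: str, first_threshold: float = 0.3) -> dict[str, float]:
    # S702/S704: preprocess the text into the target numerical sequence.
    token_ids = torch.tensor([preprocess(text)])
    # S706: multi-intent model trained from the TextCNN model.
    confidences = torch.sigmoid(model(token_ids)).squeeze(0).tolist()
    # S708/S710: if even the largest confidence is small, treat the text as "other".
    if max(confidences) <= first_threshold:
        return {"other": max(confidences)}
    # S712: adaptive threshold from the sorted confidence sequence.
    t = adaptive_threshold(confidences)
    # S714: keep the intention labels whose confidence reaches the threshold.
    result = {lab: c for lab, c in zip(ALL_LABELS, confidences) if c >= t}
    # S716/S718: post-process conflicting labels and return the final result.
    return remove_conflicts(result)
```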
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an intention recognition device for realizing the intention recognition method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the intention recognition device provided below may refer to the limitation of the intention recognition method hereinabove, and will not be repeated here.
In one embodiment, as shown in FIG. 8, an intent recognition device is provided, comprising a preprocessing module 802, a model processing module 804, an adaptive threshold module 806, an intent recognition module 808, wherein:
the preprocessing module 802 is configured to obtain a text to be tested, and preprocess the text to be tested to obtain a target numerical sequence corresponding to the text to be tested;
The model processing module 804 is configured to input a target numerical sequence into a multi-intention recognition model that is changed from a single-intention recognition model, to obtain a candidate intention label set, where the candidate intention label set includes at least one candidate intention label and a confidence level corresponding to the candidate intention label;
An adaptive threshold module 806, configured to perform adaptive threshold calculation on the candidate intention tag set if there is at least one candidate intention tag with a confidence level greater than the first threshold, to obtain an adaptive threshold;
the intention recognition module 808 is configured to select, as the target intention label, a label corresponding to a confidence level greater than the adaptive threshold in the candidate intention label set.
In one embodiment, the intent recognition device further includes a model training module for developing a single intent recognition model into a multi-intent recognition model, comprising:
the preprocessing unit is used for preprocessing the multi-intention sample set to obtain a multi-intention sample value sequence;
the single intention model unit is used for inputting the multi-intention sample numerical sequence into a single intention recognition model to obtain an initial intention label;
A cross entropy calculation unit for calculating a cross entropy loss between the initial intention label and an actual intention label of the multi-intention sample set;
and the parameter training unit is used for reversely adjusting the model parameters of the single-intention model according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, the adaptive threshold module 806 includes:
The confidence coefficient sequence unit is used for sequencing the confidence coefficient in the candidate intention label set from large to small if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, so as to obtain a confidence coefficient sequence;
The target element unit is used for sequentially selecting each confidence coefficient in the confidence coefficient sequence as a target element from the first confidence coefficient in the confidence coefficient sequence, calculating a first mean and a first variance of the target element and all confidence coefficients sequenced in front of the target element, and calculating a second mean and a second variance of all confidence coefficients sequenced behind the target element;
the difference calculation unit is used for determining a difference value through the first mean value, the second mean value, the first variance and the second variance, and taking a target element corresponding to the maximum difference value as an adaptive threshold.
In one embodiment, the difference calculating unit is further configured to calculate a difference between the first average value and the second average value to obtain a first difference value, calculate a difference between the first variance and the second variance to obtain a second difference value, and obtain a difference value according to the first difference value and the second difference value, where the difference value is proportional to a square of the first difference value, and the difference value is inversely proportional to the second difference value.
In one embodiment, the intention recognition device further includes a conflict processing module, configured to remove, after the selecting, as the target intention label, a label corresponding to a confidence coefficient greater than the adaptive threshold in the candidate intention label set, the conflict label in the target intention label, and obtain a final intention label recognition result.
In one embodiment, the conflict processing module is further configured to remove a conflict tag with a smaller confidence in the conflict tags, so as to obtain a final intended tag identification result.
The respective modules in the above-described intention recognition apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of intent recognition.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, an electronic device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting a target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence coefficient of at least one candidate intention label is larger than a first threshold value, carrying out self-adaptive threshold value calculation on the candidate intention label set to obtain a self-adaptive threshold value;
and selecting the label corresponding to the confidence coefficient larger than the self-adaptive threshold value in the candidate intention label set as a target intention label.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, performs the following steps:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence degree of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting the label corresponding to the confidence degree greater than the adaptive threshold in the candidate intention label set as a target intention label.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the following steps:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence degree of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting the label corresponding to the confidence degree greater than the adaptive threshold in the candidate intention label set as a target intention label.
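The adaptive-threshold selection recited in these steps (and set out in more detail in claim 1 below) can be sketched as follows. This is an illustrative sketch only: the first-threshold value, the exact difference-value formula, the numerical guards, and the example label names are assumptions; the preprocessing and the multi-intention recognition model are assumed to have already produced the candidate label set; and the final comparison uses "not smaller than the adaptive threshold" as in claim 1, whereas the embodiment above uses strictly "greater than".

```python
# Illustrative sketch of adaptive-threshold intention selection.
from statistics import mean, pvariance

FIRST_THRESHOLD = 0.5  # assumed value of the "first threshold"

def difference_value(m1, m2, v1, v2, eps=1e-8):
    # One possible instantiation: proportional to the square of the mean
    # difference and inversely proportional to the variance difference; the
    # abs() and eps guards are assumptions added for numerical stability.
    return (m1 - m2) ** 2 / (abs(v1 - v2) + eps)

def adaptive_threshold(confidences):
    seq = sorted(confidences, reverse=True)      # descending confidence sequence
    best_score, best_elem = float("-inf"), seq[0]
    for k, target in enumerate(seq):
        head = seq[: k + 1]                      # target element and all confidences before it
        tail = seq[k + 1:]                       # all confidences after the target element
        if not tail:                             # skip the last element (assumption)
            continue
        score = difference_value(mean(head), mean(tail),
                                 pvariance(head), pvariance(tail))
        if score > best_score:
            best_score, best_elem = score, target
    return best_elem                             # target element with the maximum difference value

def select_intentions(candidates):
    """candidates: dict mapping a candidate intention label to its confidence,
    as produced by the multi-intention recognition model."""
    confidences = list(candidates.values())
    if max(confidences) <= FIRST_THRESHOLD:
        return []                                # no label clears the first threshold
    threshold = adaptive_threshold(confidences)
    return [label for label, c in candidates.items() if c >= threshold]

# Example with hypothetical labels and confidences.
print(select_intentions({"repay": 0.93, "raise_limit": 0.88, "complain": 0.10, "greet": 0.05}))
# -> ['repay', 'raise_limit']
```

In this example the maximum difference value is obtained when the target element is 0.88, so both high-confidence labels are kept while the low-confidence ones are discarded.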
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data that have been authorized by the user or fully authorized by all parties.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer readable storage medium, and the computer program, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this description.
The foregoing embodiments illustrate only a few implementations of the application and are described in detail, but they are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be subject to the appended claims.

Claims (10)

1. A method of intent recognition, the method comprising:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
if the confidence degree of at least one candidate intention label is greater than a first threshold, sorting the confidence degrees in the candidate intention label set in descending order to obtain a confidence degree sequence;
sequentially selecting, starting from the first confidence degree in the confidence degree sequence, each confidence degree in the confidence degree sequence as a target element, calculating a first mean and a first variance of the target element and all confidence degrees ranked before the target element, and calculating a second mean and a second variance of all confidence degrees ranked after the target element;
determining a difference value through the first mean, the second mean, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as an adaptive threshold;
and selecting the label corresponding to the confidence degree not smaller than the adaptive threshold in the candidate intention label set as a target intention label.
2. The method of claim 1, wherein the manner of training a single-intention recognition model to obtain the multi-intention recognition model comprises:
preprocessing a multi-intention sample set to obtain a multi-intention sample numerical value sequence;
inputting the multi-intention sample numerical value sequence into the single-intention recognition model to obtain an initial intention label;
calculating a cross entropy loss between the initial intention label and an actual intention label of the multi-intention sample set;
and adjusting model parameters of the single-intention recognition model through back propagation according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
3. The method of claim 1, wherein the step of, if the confidence degree of at least one candidate intention label is greater than a first threshold, sorting the confidence degrees in the candidate intention label set in descending order to obtain a confidence degree sequence comprises:
obtaining the maximum confidence degree corresponding to the candidate intention labels in the candidate intention label set, and comparing the maximum confidence degree with the first threshold;
and if the maximum confidence degree is greater than the first threshold, sorting the confidence degrees in the candidate intention label set in descending order to obtain the confidence degree sequence.
4. The method of claim 1, wherein the determining a difference value through the first mean, the second mean, the first variance and the second variance comprises:
calculating the difference between the first mean and the second mean to obtain a first difference value;
calculating the difference between the first variance and the second variance to obtain a second difference value;
and obtaining the difference value according to the first difference value and the second difference value, wherein the difference value is proportional to the square of the first difference value and inversely proportional to the second difference value.
5. The method according to claim 1, wherein after the label corresponding to the confidence degree not smaller than the adaptive threshold in the candidate intention label set is selected as the target intention label, the method further comprises:
removing the conflict label from the target intention labels to obtain a final intention label recognition result.
6. The method of claim 5, wherein the removing the conflict label from the target intention labels to obtain a final intention label recognition result comprises:
removing conflict labels which do not meet preset conditions from the target intention labels to obtain the final intention label recognition result.
7. An intent recognition device, the device comprising:
the preprocessing module is used for acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical value sequence corresponding to the text to be detected;
the model processing module is used for inputting the target numerical value sequence into a multi-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels;
the adaptive threshold module is used for, if the confidence degree of at least one candidate intention label is greater than a first threshold, sorting the confidence degrees in the candidate intention label set in descending order to obtain a confidence degree sequence, sequentially selecting, starting from the first confidence degree in the confidence degree sequence, each confidence degree in the confidence degree sequence as a target element, calculating a first mean and a first variance of the target element and all confidence degrees ranked before the target element, calculating a second mean and a second variance of all confidence degrees ranked after the target element, determining a difference value through the first mean, the second mean, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold;
the intention recognition module is used for selecting the label corresponding to the confidence degree not smaller than the adaptive threshold in the candidate intention label set as the target intention label.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202111303756.XA 2021-11-05 2021-11-05 Intent recognition method, device, equipment and storage medium Active CN114117037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111303756.XA CN114117037B (en) 2021-11-05 2021-11-05 Intent recognition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111303756.XA CN114117037B (en) 2021-11-05 2021-11-05 Intent recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114117037A CN114117037A (en) 2022-03-01
CN114117037B true CN114117037B (en) 2025-02-11

Family

ID=80380701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111303756.XA Active CN114117037B (en) 2021-11-05 2021-11-05 Intent recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114117037B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818703B (en) * 2022-06-28 2022-09-16 珠海金智维信息科技有限公司 Multi-intention recognition method and system based on BERT language model and TextCNN model
CN115240676A (en) * 2022-08-02 2022-10-25 中国平安人寿保险股份有限公司 Intelligent outbound call method, device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325037A (en) * 2020-03-05 2020-06-23 苏宁云计算有限公司 Text intent recognition method, apparatus, computer equipment and storage medium
CN111680517A (en) * 2020-06-10 2020-09-18 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for training a model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11267128B2 (en) * 2019-05-08 2022-03-08 International Business Machines Corporation Online utility-driven spatially-referenced data collector for classification
CN114424186A (en) * 2019-12-16 2022-04-29 深圳市欢太科技有限公司 Text classification model training method, text classification method, device and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325037A (en) * 2020-03-05 2020-06-23 苏宁云计算有限公司 Text intent recognition method, apparatus, computer equipment and storage medium
CN111680517A (en) * 2020-06-10 2020-09-18 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for training a model

Also Published As

Publication number Publication date
CN114117037A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN112084337B (en) Training method of text classification model, text classification method and equipment
CN112528637B (en) Text processing model training method, device, computer equipment and storage medium
CN108733778B (en) Industry type identification method and device of object
CN112182180A (en) Question and answer processing method, electronic device, and computer-readable medium
CN112966088B (en) Unknown intention recognition method, device, equipment and storage medium
JP5214760B2 (en) Learning apparatus, method and program
CN114528912B (en) False news detection method and system based on progressive multi-mode fusion network
CN112632248B (en) Question and answer method, device, computer equipment and storage medium
CN112101042B (en) Text emotion recognition method, device, terminal equipment and storage medium
CN112789626A (en) Scalable and compressed neural network data storage system
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
CN112380853A (en) Service scene interaction method and device, terminal equipment and storage medium
CN113254654A (en) Model training method, text recognition method, device, equipment and medium
CN114117037B (en) Intent recognition method, device, equipment and storage medium
CN111783903A (en) Text processing method, text model processing method and device and computer equipment
EP4288910B1 (en) Continual learning neural network system training for classification type tasks
CN110413992A (en) A kind of semantic analysis recognition methods, system, medium and equipment
CN117079298A (en) Information extraction method, training method of information extraction system and information extraction system
CN118298434A (en) Image recognition method, model training method of target recognition model and related equipment
CN115730597A (en) Multi-level semantic intention recognition method and related equipment thereof
CN111611796A (en) Hypernym determination method and device for hyponym, electronic device and storage medium
CN113849645B (en) Mail classification model training method, device, equipment and storage medium
CN114780757A (en) Short media label extraction method and device, computer equipment and storage medium
CN115700828A (en) Table element identification method and device, computer equipment and storage medium
CN119166744A (en) Intent identification method, device, equipment and storage medium for financial service applications

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Zhaolian Consumer Finance Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: MERCHANTS UNION CONSUMER FINANCE Co.,Ltd.

Country or region before: China

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant