CN115292495A - Emotion analysis method and device, electronic equipment and storage medium
- Publication number
- CN115292495A (application CN202210958510.4A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- target user
- model
- probability
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/355—Information retrieval of unstructured textual data; clustering or classification; creation or modification of classes or clusters
- G06F16/3329—Querying; query formulation; natural language query formulation
- G06F16/3346—Querying; query processing; query execution using a probabilistic model
- G06F40/211—Handling natural language data; parsing; syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
- G06F40/216—Parsing using statistical methods
- G06F40/30—Semantic analysis
- G06N3/08—Computing arrangements based on biological models; neural networks; learning methods
Abstract
The embodiment of the application provides an emotion analysis method and device, an electronic device, and a storage medium, belonging to the technical field of data processing. The method comprises the following steps: preprocessing acquired historical dialogue information of a target user to obtain sample dialogue information of the target user; inputting the sample dialogue information into a preset emotion model for emotion classification to obtain an emotion frequency set; establishing an emotion transfer matrix of the target user according to the emotion frequency set, and training the emotion model based on the emotion transfer matrix to obtain an emotion transfer model; and inputting acquired target dialogue information of the target user into the emotion transfer model for probability prediction to obtain an emotion fluctuation probability value. The embodiment of the application can accurately analyze how a user's emotion changes under the influence of other people's emotions.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for emotion analysis, an electronic device, and a storage medium.
Background
Portraying a person's personality has broad application prospects and can be applied to scenarios such as friend recommendation in social networks, social matching, and psychological counseling. At present, the emotion label of each sentence in a conversation is generally predicted by means such as Emotion Recognition in Conversation (ERC) or questionnaire surveys. The ERC task analyzes the emotion label of a single sentence in a conversation, so an existing ERC task only reveals the emotion a person expresses in a particular sentence. The emotion obtained in this way is therefore one-sided, and the probability that the user's emotion is influenced by other people cannot be accurately calculated. In addition, questionnaire surveys are easily influenced by the subjective emotions of the respondents and require a certain number of respondents, which increases labor costs.
Disclosure of Invention
The embodiment of the application mainly aims to provide an emotion analysis method, an emotion analysis device, electronic equipment and a storage medium, which can accurately analyze emotion changes of a user under the influence of emotions of other people.
To achieve the above object, a first aspect of an embodiment of the present application proposes a method for emotion analysis, the method including:
preprocessing acquired historical dialogue information of a target user to obtain sample dialogue information of the target user, wherein the sample dialogue information carries a plurality of emotion labels;
inputting the sample conversation information into a preset emotion model for emotion classification to obtain an emotion frequency set, wherein the emotion frequency set comprises the frequency of occurrence of each emotion label in the sample conversation information;
establishing an emotion transfer matrix of the target user according to the emotion frequency set, wherein the emotion transfer matrix is used for representing a probability value that the emotion of the target user is influenced by the emotions of other people;
training the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
and inputting the acquired target conversation information of the target user into the emotion transfer model for probability prediction to obtain an emotion fluctuation probability value, wherein the emotion fluctuation probability value is a probability value representing that the emotion of the target user changes under the influence of the emotion of other people.
In some embodiments, the historical conversation information includes conversation information for a plurality of users having a conversation with the target user; the preprocessing the acquired historical dialogue information of the target user to obtain the sample dialogue information of the target user comprises the following steps:
determining the sample dialogue information corresponding to the target user in the historical dialogue information;
and labeling the sample dialogue information based on a preset labeling model to obtain the emotion label of the sample dialogue information.
In some embodiments, the preset emotion model comprises a sentence vector encoder and an emotion classifier; the inputting the sample dialogue information into a preset emotion model for emotion classification to obtain an emotion frequency set comprises:
inputting sentences in the sample dialogue information into the sentence vector encoder to perform sentence segmentation to obtain sentence vectors of the sample dialogue information;
inputting the sentence vector into the emotion classifier so that the emotion classifier performs emotion recognition on the emotion label in the sentence vector to obtain a recognition result;
and carrying out classified statistics on the recognition results to obtain the emotion frequency set.
In some embodiments, further comprising:
establishing a self emotion matrix according to the emotion frequency set, wherein the self emotion matrix is used for representing the probability value that the emotion of the current conversation of the target user influences the emotion of the next conversation;
and establishing an emotion influence matrix according to the emotion frequency set, wherein the emotion influence matrix is used for representing probability values that the emotion of the target user influences the emotions of other people in the process of their conversations with the target user.
In some embodiments, the training the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model includes:
constraining the emotion labels in the sample dialogue information to obtain a real emotion label sequence;
inputting the real emotion label sequence into the emotion classifier to perform probability calculation to obtain label probability distribution;
optimizing a preset score value function according to the emotion transfer matrix, the emotion influence matrix, the real emotion label sequence and the label probability distribution to obtain a prediction sequence probability function;
and training the preset emotion model according to the prediction sequence probability function to obtain the emotion transfer model.
In some embodiments, the preset emotion model comprises a conditional random field layer; the step of constraining the emotion labels in the sample dialogue information to obtain a real emotion label sequence comprises the following steps:
inputting the emotion labels in the sample dialogue information into the conditional random field layer for screening to obtain a real emotion label probability value;
and carrying out category statistics on the probability value of the real emotion label to obtain the real emotion label sequence.
In some embodiments, the predicted sequence probability function comprises a likelihood probability function and a loss function; the training the preset emotion model according to the prediction sequence probability function to obtain the emotion transfer model comprises the following steps:
obtaining a likelihood probability value of the preset emotion model according to the likelihood probability function;
inputting the likelihood probability value into the loss function for calculation to obtain a probability loss value;
obtaining the emotion transfer probability value of the preset emotion model according to the loss function;
and when the probability loss value is smaller than the emotion transfer probability value, updating the preset emotion model according to the probability loss value to obtain the emotion transfer model.
To achieve the above object, a second aspect of embodiments of the present application provides a mood analyzing device, including:
the conversation processing module is used for preprocessing the acquired historical conversation information of the target user to obtain sample conversation information of the target user, wherein the sample conversation information carries a plurality of emotion labels;
the emotion classification module is used for inputting the sample conversation information into a preset emotion model to perform emotion classification to obtain an emotion frequency set, wherein the emotion frequency set comprises the frequency of occurrence of each emotion label in the sample conversation information;
the matrix establishing module is used for establishing an emotion transfer matrix of the target user according to the emotion frequency set, wherein the emotion transfer matrix is used for representing a probability value that the emotion of the target user is influenced by the emotion of other people;
the model training module is used for training the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
and the probability prediction module is used for inputting the acquired target conversation information of the target user into the emotion transfer model for probability prediction to obtain an emotion fluctuation probability value, wherein the emotion fluctuation probability value represents the probability that the emotion of the target user changes under the influence of the emotions of other people.
In order to achieve the above object, a third aspect of embodiments of the present application provides an electronic device, which includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the emotion analysis method according to the first aspect.
In order to achieve the above object, a fourth aspect of embodiments of the present application proposes a storage medium, which is a computer-readable storage medium for computer-readable storage, and stores one or more programs, which are executable by one or more processors to implement the emotion analysis method according to the first aspect.
In the emotion analysis method and device, the electronic device, and the storage medium, acquired historical dialogue information of a target user is first preprocessed to obtain sample dialogue information of the target user carrying emotion labels. The sample dialogue information is then input into a preset emotion model for emotion classification, so that the number of times each emotion label appears in the sample dialogue information is obtained, and an emotion frequency set is generated from these counts. An emotion transfer matrix of the target user is then established according to the emotion frequency set to facilitate subsequent analysis of the target user's emotion changes, and the preset emotion model is trained according to the emotion transfer matrix, improving its robustness and yielding an emotion transfer model. Finally, acquired target dialogue information of the target user is input into the emotion transfer model for probability prediction, so that the probability value that the target user's emotion changes under the influence of other people's emotions is obtained through the emotion transfer model. This improves the accuracy of emotion analysis for the target user and makes it possible to accurately analyze how the target user's emotion changes under the influence of others.
Drawings
Fig. 1 is a flowchart of an emotion analysis method provided in an embodiment of the present application;
fig. 2 is a flowchart of step S101 in fig. 1;
FIG. 3 is a flowchart of step S102 in FIG. 1;
FIG. 4 is a flow diagram of a method for sentiment analysis provided by another embodiment of the present application;
fig. 5 is a flowchart of step S104 in fig. 1;
fig. 6 is a flowchart of step S501 in fig. 5;
FIG. 7 is a flowchart of step S504 in FIG. 5;
fig. 8 is a schematic structural diagram of an emotion analyzing apparatus provided in an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, as well as in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
First, several terms referred to in the present application are explained:
conversational Emotion Recognition (ERC): ERC is a main task in conversational emotion research for implementing a conversational system with emotion understanding capability. This task is a classification task that aims at classifying the emotions of all utterances in a session. The input to the task is a continuous session and the output is the emotion of all utterances in the session.
Natural Language Processing (NLP): NLP uses computers to process, understand, and apply human languages (such as Chinese and English). It is a branch of artificial intelligence and an interdisciplinary field between computer science and linguistics, often called computational linguistics. Natural language processing includes parsing, semantic analysis, discourse understanding, and the like. It is commonly used in machine translation, character recognition of handwriting and print, speech recognition and text-to-speech conversion, information intention recognition, information extraction and filtering, text classification and clustering, public opinion analysis and viewpoint mining, and it involves data mining, machine learning, knowledge acquisition, knowledge engineering, artificial intelligence research, and linguistics research related to language computation.
Conditional Random Field (CRF): conditional random fields are a class of discriminant models that are best suited for the prediction task, where neighboring context information or states may influence the current prediction. CRF is an undirected graph model, and is commonly used for labeling or analyzing sequence data, such as natural language characters or biological sequences. In recent years, good effects are achieved in sequence tagging tasks such as word segmentation, part of speech tagging and named entity recognition.
Information Extraction (Information Extraction): and extracting the fact information of entities, relations, events and the like of specified types from the natural language text, and forming a text processing technology for outputting structured data. Information extraction is a technique for extracting specific information from text data. The text data is composed of specific units, such as sentences, paragraphs and chapters, and the text information is composed of small specific units, such as words, phrases, sentences and paragraphs or combinations of these specific units. The extraction of noun phrases, names of people, names of places, etc. in the text data is text information extraction, and of course, the information extracted by the text information extraction technology may be various types of information.
Bidirectional Encoder Representations from Transformers (BERT): The BERT model is a pre-trained language representation model. It emphasizes that pre-training no longer uses a traditional one-way language model, or a shallow concatenation of two one-way language models, but instead uses a Masked Language Model (MLM) so as to generate deep bidirectional language representations. The goal of the BERT model is to use large-scale unlabeled corpora for training to obtain a representation of text that contains rich semantic information, that is, a semantic representation of the text, which is then fine-tuned for a specific NLP task and finally applied to that task.
Based on this, the embodiment of the application provides an emotion analysis method and device, an electronic device and a storage medium, and aims to improve the accuracy of emotion analysis on a target user and accurately analyze the emotion change condition of the target user influenced by the emotion of other people.
The emotion analysis method and apparatus, the electronic device, and the storage medium provided in the embodiments of the present application are specifically described in the following embodiments, and first, the emotion analysis method in the embodiments of the present application is described.
The embodiment of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain the best results.
The artificial intelligence base technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The embodiment of the application provides an emotion analysis method, and relates to the technical field of artificial intelligence. The emotion analysis method provided by the embodiment of the application can be applied to a terminal, a server side and software running in the terminal or the server side. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, or the like; the server side can be configured into an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and cloud servers for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN (content delivery network) and big data and artificial intelligence platforms; the software may be an application or the like that implements the emotion analyzing method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In each embodiment of the present application, when data related to the identity or characteristics of a user, such as user information, user behavior data, user history data, and user location information, is processed, permission or consent of the user is obtained, and the collection, use, and processing of the data comply with relevant laws and regulations and standards of relevant countries and regions. In addition, when the embodiment of the present application needs to acquire sensitive personal information of a user, individual permission or individual consent of the user is obtained through a pop-up window or a jump to a confirmation page, and after the individual permission or individual consent of the user is definitely obtained, necessary user-related data for enabling the embodiment of the present application to operate normally is acquired.
Fig. 1 is an optional flowchart of an emotion analysis method provided in an embodiment of the present application, and the method in fig. 1 may include, but is not limited to, steps S101 to S105.
Step S101, preprocessing the acquired historical dialogue information of the target user to obtain sample dialogue information of the target user;
it should be noted that the sample dialogue information carries a plurality of emotion labels.
In step S101 in some embodiments, the acquired historical dialog information of the target user is preprocessed to obtain sample dialog information of the target user, so as to facilitate subsequent analysis of emotion probability values and the like of the target user according to the sample dialog information of the target user.
It can be understood that the historical conversation information may be the target user's daily chat records with friends or relatives, where the chat records may include chat voice messages, chat text messages, or chat emoticons. The amount of historical conversation information should be as large as possible, covering various time periods and various chat partners. The historical conversation information may be acquired, for example, three times per month or once a day; this embodiment is not specifically limited.
It should be noted that the above-mentioned information such as the history conversation information, the chat history, the chat voice, and the chat emoticon is acquired when the target user allows it.
Step S102, inputting sample dialogue information into a preset emotion model for emotion classification to obtain an emotion frequency set;
it should be noted that the emotion frequency set includes the number of times each emotion label appears in the sample dialog information.
It is understood that emotion labels include, but are not limited to, happiness, anger, sadness, excitement, distress, and the like.
In step S102 of some embodiments, the sample dialogue information of the target user is input into a preset emotion model for emotion classification, frequency values of the different emotion labels appearing in the sample dialogue information are counted, and an emotion frequency set is finally obtained. This yields the proportions of the emotional states in the target user's daily life and facilitates accurate subsequent analysis of the target user's emotions.
It should be noted that the emotion frequency set further includes the frequencies with which the relevant emotion labels appear in different sentences of the sample conversation information, where the different sentences may be the target user's own conversation content or conversation content in which the target user interacts with another person. For example, the target user's own conversation content might be: "I bought a bunch of flowers today and was very happy, but unfortunately by the time I got home the flowers had withered a little, which made me a bit sad." The preset emotion model identifies the emotion labels in the sample conversation information and, combining the context of the surrounding sentences, classifies the emotion labels in that specific context to obtain the emotion frequency set. Alternatively, the conversation between the target user and another person might be: target user, "The weather is great today, let's go on a picnic tomorrow"; user A, "I have a cold and feel terrible, I won't go"; target user, "All right, what a pity; take care and get some rest." The preset emotion model identifies the emotion labels in the sample conversation information and classifies the emotion labels in that conversational context according to the conversation content of the target user and user A, obtaining the emotion frequency set.
It is understood that the percentage of each emotion of the target user is the emotion distribution of the target user, the distribution value of each emotion is 0% to 100%, and the total percentage of all emotions is 100%, for example, the happy emotion of the target user is 30%, the distressed emotion is 20%, the angry emotion is 40%, the excited emotion is 10%, and the like, and this embodiment is not limited in particular.
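As an illustration of how such a frequency set and distribution might be computed, the following is a minimal Python sketch; the patent gives no code, and the helper function, label names, and example sentences are hypothetical:

```python
from collections import Counter

def build_emotion_frequency_set(labeled_utterances):
    """Count how often each emotion label appears in the sample
    dialogue information (step S102) and derive the distribution."""
    counts = Counter(label for _, label in labeled_utterances)
    total = sum(counts.values())
    # Normalised shares: each emotion's proportion, summing to 100%.
    distribution = {label: n / total for label, n in counts.items()}
    return counts, distribution

# Hypothetical labelled sentences from the target user.
sample = [("I bought a bunch of flowers today", "happy"),
          ("They withered on the way home", "sad"),
          ("That made me a bit upset", "sad"),
          ("Still, it was a nice day", "happy")]
freqs, dist = build_emotion_frequency_set(sample)
print(freqs)  # Counter({'happy': 2, 'sad': 2})
print(dist)   # {'happy': 0.5, 'sad': 0.5}
```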
Step S103, establishing an emotion transfer matrix of the target user according to the emotion frequency set;
it should be noted that the emotion transfer matrix is used to represent probability values that the emotion of the target user is affected by the emotion of another person.
In step S103 of some embodiments, the emotion transfer matrix of the target user is established according to the emotion frequency set, which facilitates subsequent training of the preset emotion model according to the emotion transfer matrix, thereby improving the accuracy of predicting emotion change when the emotion of the target user in the preset emotion model is affected.
In some embodiments, the emotion transfer matrix is established according to the counts of the corresponding emotion labels in the emotion frequency set, where the emotion transfer matrix is a two-dimensional matrix. For example, in a situation where the target user converses with another person, if the count of emotion labels in the emotion frequency set for which the target user's emotion is happy is c1, and the count for which the other person's emotion is sad is c2, then the corresponding emotion transfer entry is (c1, c2), which reflects whether, when the other person is sad, the target user's emotion remains happy or changes from happy to sad.
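A minimal sketch of this construction, assuming the matrix is built by counting (other-speaker emotion, target-user emotion) pairs over adjacent turns and row-normalising the counts into probabilities; the label set and pair extraction below are hypothetical:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "excited"]  # hypothetical label set
IDX = {e: i for i, e in enumerate(EMOTIONS)}

def emotion_transfer_matrix(pairs):
    """Build M3 from (other_emotion, target_emotion) pairs observed in
    adjacent turns: row i counts the target user's reactions when the
    other speaker showed emotion i."""
    m = np.zeros((len(EMOTIONS), len(EMOTIONS)))
    for other, target in pairs:
        m[IDX[other], IDX[target]] += 1
    # Row-normalise counts into probabilities where a row is non-empty.
    row_sums = m.sum(axis=1, keepdims=True)
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

pairs = [("sad", "sad"), ("sad", "happy"), ("happy", "happy")]
M3 = emotion_transfer_matrix(pairs)
print(M3[IDX["sad"]])  # P(target user's emotion | other speaker was sad)
```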
Step S104, training a preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
in step S104 of some embodiments, a preset emotion model is trained based on the emotion transfer matrix, so that the emotion transfer model can accurately calculate a probability value that the emotion of the target user is affected, thereby enhancing robustness of the emotion transfer model.
And step S105, inputting the acquired target dialogue information of the target user into an emotion transfer model for probability prediction to obtain an emotion fluctuation probability value.
It should be noted that the emotion fluctuation probability value is a probability value representing that the emotion of the target user changes under the influence of the emotion of another person.
In step S105 of some embodiments, first, target session information of a target user is obtained, and then the target session information is input into a trained emotion transfer model for probability prediction, so as to obtain an emotion fluctuation probability value that the target user is affected by others, thereby accurately obtaining the probability that the target user has emotion change.
It can be understood that the target dialog information is also a chat record of the target user with friends or relatives in daily life, and details are not described herein.
In steps S101 to S105 illustrated in the embodiment of the application, the acquired historical dialogue information of the target user is first preprocessed to obtain sample dialogue information carrying emotion labels. The sample dialogue information is then input into a preset emotion model for emotion classification, so that the number of times each emotion label appears is obtained and an emotion frequency set is generated from these counts. An emotion transfer matrix of the target user is then established according to the emotion frequency set to facilitate subsequent analysis of the target user's emotion changes, and the preset emotion model is trained according to the emotion transfer matrix, improving its robustness and yielding an emotion transfer model. Finally, the acquired target dialogue information of the target user is input into the emotion transfer model for probability prediction, so that the probability value that the target user's emotion changes under the influence of other people's emotions is obtained. This improves the accuracy of emotion analysis for the target user, and the target user's emotion changes under the influence of others can be accurately analyzed.
Referring to fig. 2, in some embodiments, step S101 may include, but is not limited to, step S201 to step S202:
the history dialogue information includes dialogue information of a plurality of users who have performed dialogues with the target user.
Step S201, determining sample dialogue information corresponding to a target user in historical dialogue information;
and S202, labeling the sample dialogue information based on a preset labeling model to obtain an emotion label of the sample dialogue information.
In step S201 of some embodiments, sample dialog information corresponding to the target user is determined in the historical dialog information, so as to facilitate subsequent emotion analysis.
It should be noted that the sample session information corresponding to the target user may be determined by the identity of the target user, or the sample session information of the target user may be manually selected, and the like, which is not limited in this embodiment.
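A minimal sketch of this determination by identity, assuming each turn in the history records a speaker identifier; the field names and example data are hypothetical:

```python
def select_sample_dialogue(history, target_user_id):
    """Step S201 (a sketch): keep only the utterances whose speaker id
    matches the target user."""
    return [turn for turn in history if turn["speaker_id"] == target_user_id]

history = [
    {"speaker_id": "u0", "text": "The weather is great today"},
    {"speaker_id": "u1", "text": "I have a cold, I won't go"},
    {"speaker_id": "u0", "text": "What a pity, get some rest"},
]
print(select_sample_dialogue(history, "u0"))  # only u0's utterances remain
```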
In step S202 of some embodiments, the sample dialogue information is labeled based on a preset labeling model to obtain a plurality of emotion labels of the sample dialogue information, so that different emotion labels can be analyzed.
It should be noted that the preset labeling model is a trained conversational emotion recognition model, which automatically predicts and labels the sample dialogue information, improving the accuracy of the emotion labels and reducing labeling cost.
It can be understood that the sample dialog information may also be labeled in a manner of manual labeling, and this embodiment is not limited in particular.
Referring to fig. 3, in some embodiments, step S102 may include, but is not limited to, step S301 to step S303:
it should be noted that the preset emotion model includes a sentence vector encoder and an emotion classifier.
Step S301, inputting sentences in the sample dialogue information into a sentence vector encoder for sentence segmentation to obtain sentence vectors of the sample dialogue information;
step S302, inputting the sentence vectors into an emotion classifier so that the emotion classifier can identify emotions of emotion labels in the sentence vectors to obtain an identification result;
and step S303, carrying out classified statistics on the identification results to obtain an emotion frequency set.
In step S301 in some embodiments, each text sentence in the sample dialogue information is input into the sentence vector encoder, so that the sentence vector encoder segments the sentence according to its fields and the preset separators and finally takes the vector at position 0 as the output sentence vector, thereby obtaining complete sentence vectors for the sample dialogue information and improving the accuracy of recognizing the emotion labels in it.
It should be noted that the sentence vector encoder is a BERT model.
In step S302 of some embodiments, the sentence vectors obtained in step S301 are input into an emotion classifier, so that the emotion classifier performs emotion recognition on the emotion tags appearing in the sentence vectors to obtain recognition results, thereby determining probability distributions of different emotion tags in the sentence vectors.
In step S303 in some embodiments, classification statistics is performed on the recognition result obtained in step S302, probability distribution of different emotion labels in a sentence vector is determined, frequency of occurrence of each emotion label is obtained, and finally an emotion frequency set is obtained through statistics.
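A sketch of steps S301–S302 using the Hugging Face transformers library; the checkpoint name and the six-way label set are assumptions, as the patent only specifies a BERT sentence vector encoder followed by an emotion classifier:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoder = BertModel.from_pretrained("bert-base-chinese")
num_emotions = 6  # assumed size of the emotion label set
classifier = torch.nn.Linear(encoder.config.hidden_size, num_emotions)

def classify_utterance(sentence: str) -> torch.Tensor:
    """Encode the sentence, take the vector at the [CLS] position
    (position 0, as in formula (1)), then apply the linear emotion
    classifier p_t = W*u_t + b (formula (2))."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, L, d)
    u_t = hidden[:, 0]                                # sentence vector u_t
    return classifier(u_t).softmax(dim=-1)            # emotion probabilities

print(classify_utterance("今天天气真好"))  # untrained classifier: illustrative only
```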
Referring to fig. 4, fig. 4 is an alternative flowchart of a method for emotion analysis according to another embodiment of the present application, and the method in fig. 4 may include, but is not limited to, steps S401 to S402.
S401, establishing a self emotion matrix according to the emotion frequency set;
it should be noted that the self emotion matrix is used to represent probability values that the emotion of the current conversation of the target user affects the emotion of the next conversation.
And step S402, establishing an emotion influence matrix according to the emotion frequency set.
It should be noted that the emotion influence matrix is used to represent probability values that the target user's emotion influences the emotions of other people during their conversations with the target user.
In step S401 of some embodiments, a self emotion matrix, representing the probability that the target user transfers from the current emotion to the emotion of the next utterance, is established according to the frequencies with which emotion labels appear in conversations under different conversational situations in the emotion frequency set; the self emotion matrix is a two-dimensional matrix.
It can be understood that, because the emotion frequency set contains frequency values of the various emotion labels in multiple conversational situations, the self emotion matrix can be established from the frequencies of emotion labels in different situations. For example, if the selected situation is the target user's own conversation, and in the emotion frequency set the frequency with which the target user's current emotion is calm is a1 while the frequency with which the emotion of the next sentence is happy is c1, then the corresponding self emotion matrix entry is (a1, c1).
In step S402 of some embodiments, an emotion influence matrix, representing probability values that the target user's emotion influences the emotions of others during their conversations with the target user, is established according to the frequencies with which emotion labels appear in conversations under different conversational situations in the emotion frequency set; the emotion influence matrix is a two-dimensional matrix.
It can be understood that a conversational situation can be selected in which the target user exchanges dialogue with another person; if the frequency with which the other person's current emotion is excited is b1 and the frequency with which the target user's current emotion is angry is a2, then the corresponding emotion influence matrix entry is (b1, a2).
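Under the same counting-and-normalising assumption as the transfer matrix M3 above, M1 and M2 can be sketched with one shared helper; the pair lists and label set here are hypothetical:

```python
import numpy as np

def pair_count_matrix(pairs, idx):
    """Count (row_emotion, col_emotion) pairs and row-normalise
    them into probabilities."""
    m = np.zeros((len(idx), len(idx)))
    for a, b in pairs:
        m[idx[a], idx[b]] += 1
    s = m.sum(axis=1, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

idx = {e: i for i, e in enumerate(["calm", "happy", "sad", "angry", "excited"])}
# M1: consecutive utterances by the target user alone
# (current emotion -> emotion of the user's own next utterance).
M1 = pair_count_matrix([("calm", "happy"), ("happy", "happy")], idx)
# M2: target user's emotion paired with the other speaker's emotion.
M2 = pair_count_matrix([("angry", "sad"), ("happy", "excited")], idx)
print(M1[idx["calm"]])  # P(next own emotion | currently calm)
```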
Referring to fig. 5, in some embodiments, step S104 includes, but is not limited to, steps S501 to S504:
step S501, constraining emotion labels in sample dialogue information to obtain a real emotion label sequence;
step S502, inputting the real emotion label sequence into an emotion classifier to carry out probability calculation, and obtaining label probability distribution;
step S503, optimizing a preset score value function according to the emotion transfer matrix, the emotion matrix of the user, the emotion influence matrix, the real emotion label sequence and the label probability distribution to obtain a prediction sequence probability function;
and step S504, training a preset emotion model according to the prediction sequence probability function to obtain an emotion transfer model.
In step S501 in some embodiments, the emotion labels in the sample dialog information are constrained to obtain a real emotion label sequence, so as to facilitate subsequent analysis of the emotion of the target user according to the real emotion label sequence.
In step S502 of some embodiments, the real emotion tag sequence obtained in step S501 is input to an emotion classifier for probability calculation, so that the emotion classifier predicts emotion tags in the real tag sequence to obtain a tag probability distribution.
In step S503 of some embodiments, the preset score value function is optimized according to each matrix, the real emotion tag sequence and the tag probability distribution, so as to obtain a prediction sequence probability function for predicting emotion change of the target user.
It should be noted that, in the embodiment of the present application, a maximum likelihood optimization method is used to optimize the preset score value function.
In step S504 of some embodiments, the preset emotion model is finally trained according to the prediction sequence probability function to enhance the robustness of the prediction sequence probability function, so as to obtain an emotion transfer model, which facilitates subsequent prediction of emotion change of the target user.
Referring to fig. 6, in some embodiments, step S501 includes, but is not limited to, step S601 to step S602:
it should be noted that the preset emotion model includes a conditional random field layer.
Step S601, inputting emotion labels in sample dialogue information into a conditional random field layer for screening to obtain a true emotion label probability value;
step S602, carrying out category statistics on the probability value of the real emotion label to obtain a real emotion label sequence.
In some embodiments, the emotion labels in the sample dialogue information are input into the conditional random field layer of the preset emotion model for screening to obtain the maximized real emotion label probability value, and category statistics are then performed on the real emotion label probability values to obtain the real emotion label sequence, thereby improving prediction accuracy.
Referring to fig. 7, in some embodiments, step S504 may include, but is not limited to, step S701 to step S704:
it should be noted that the prediction sequence probability function includes a likelihood probability function and a loss function.
Step S701, a likelihood probability value of a preset emotion model is obtained according to a likelihood probability function;
step S702, inputting the likelihood probability value into a loss function for calculation to obtain a probability loss value;
step S703, obtaining a emotion transfer probability value of a preset emotion model according to the loss function;
and step S704, when the probability loss value is smaller than the emotion transfer probability value, updating the preset emotion model according to the probability loss value to obtain the emotion transfer model.
In some embodiments, the prediction sequence probability function comprises a likelihood probability function and a loss function. The preset emotion model undergoes maximum likelihood optimization according to the likelihood probability function to obtain its likelihood probability value. The likelihood probability value is then input into the loss function for calculation to obtain the probability loss value, and the emotion transfer probability value of the current preset emotion model is obtained according to the loss function. Finally, the probability loss value and the transfer probability value are compared; when the probability loss value is smaller than the emotion transfer probability value, the preset emotion model is updated according to the probability loss value to obtain the emotion transfer model.
It should be noted that, when the probability loss value is less than or equal to the transition probability value, the training of the preset emotion model may be stopped.
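A hedged sketch of this training loop (steps S701–S704); `model.log_likelihood` and `threshold_fn` are hypothetical stand-ins, since the patent does not specify how the emotion transfer probability value is produced from the loss function:

```python
def train_emotion_transfer_model(model, batches, optimizer, threshold_fn):
    """Train with the negative log-likelihood loss (formula (6)) and
    stop once the probability loss value drops to the model's
    emotion-transfer probability threshold."""
    for batch in batches:
        loss = -model.log_likelihood(batch)   # loss = -log P(y|X)
        transfer_prob = threshold_fn(model)   # patent's stopping value
        if loss.item() <= transfer_prob:
            break                             # training may stop here
        optimizer.zero_grad()
        loss.backward()                       # gradients computed by backprop
        optimizer.step()
    return model
```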
Referring to fig. 8, an embodiment of the present application further provides an emotion analysis apparatus, which can implement the emotion analysis method, and the apparatus includes:
the conversation processing module 801 is configured to preprocess the acquired historical conversation information of the target user to obtain sample conversation information of the target user, where the sample conversation information carries a plurality of emotion labels;
the emotion classification module 802 is configured to input the sample dialog information into a preset emotion model to perform emotion classification, so as to obtain an emotion frequency set, where the emotion frequency set includes the number of times that each emotion label appears in the sample dialog information;
the matrix establishing module 803 is configured to establish an emotion transfer matrix of the target user according to the emotion frequency set, where the emotion transfer matrix is used to represent a probability value that the emotion of the target user is influenced by the emotions of other people;
the model training module 804 is used for training a preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
the probability prediction module 805 is configured to input the acquired target conversation information of the target user into an emotion transfer model for probability prediction to obtain an emotion fluctuation probability value, where the emotion fluctuation probability value is used as a probability value of emotion change of the target user under influence of emotion of another person.
The specific implementation of the emotion analyzing apparatus is substantially the same as the specific implementation of the emotion analyzing method, and is not described herein again.
In order to more clearly illustrate the flow of the emotion analysis method, a specific example is described below.
The first example is as follows:
suppose there are M speakers p in a conversation 0 、p 1 ……p M-1 ,p 0 Is the target user to be analyzed, speaker p i In all say len i In other words. Suppose there are N words in a dialog X, which are sample dialog information, i.e., X 0 、x 1 ……x N-1 Wherein x in the sample dialogue information i Is the speaker p speaker(i) Said, spaker (i) is x i Subscript of the corresponding speaker. And, other speakers p i Say that it isutterance(p i )[j]Is the speaker p i The subscript of the j-th utterance. The ERC task is to predict emotional labels per sentence, including happiness, anger, neutrality, injury, excitement, anger, etc.
First, a single text sentence x_t in the sample dialogue information is input into the sentence vector encoder, which segments it into the token sequence [[cls], x_{t,0}, x_{t,1}, …, x_{t,L-1}, [sep]], where x_t contains L tokens, x_{t,i} denotes the i-th token of x_t, and [cls] and [sep] denote the beginning and end separators of the sentence, respectively. The representation at the [cls] position is then taken as the sentence vector identifier, giving u_t, a vector of size d, as in formula (1):

u_t = BERT([[cls], x_{t,0}, x_{t,1}, …, x_{t,L-1}, [sep]])[0]    (1)

In formula (1), [0] denotes the vector at position 0, which is taken as the final output sentence vector.
Then, the sentence vector u_t is input into the emotion classifier for emotion recognition, obtaining the frequency distribution of each emotion and thus the emotion frequency set, as given by formula (2):

p_t = W·u_t + b    (2)
then, establishing a self emotion matrix M1, an emotion influence matrix M2 and an emotion transfer matrix M3 for the target according to the emotion frequency set, wherein the M1, the M2 and the M3 are two-dimensional matrixes;
inputting the sample dialogue information into a conditional random field layer for screening to obtain a true emotion label probability value, wherein a true emotion label probability value formula (3) is as follows:
max(P(y|X)) (3)
carrying out category statistics on the probability value of the real emotion label to obtain a real emotion label sequence y = [ y = 0 ,y 1 ……y N-1 ]Wherein, y i Is the true emotion label of the ith sentence. X = [ p ] 0 ,p 1 ……p N-1 ]Is the probability distribution of labels calculated by the emotion classifier, and the real label of the ith sentence is y i The ith sentiment classifier predicts a probability distribution of labels as p i 。
Finally, the preset score value function in the preset emotion model, formula (4), is optimized according to the emotion transfer matrix, the self emotion matrix, and the emotion influence matrix.
In the model training process, a maximum likelihood optimization method is adopted, yielding the likelihood probability value of the prediction sequence, formula (5).
inputting the likelihood probability value into a loss function for calculation to obtain a probability loss value, wherein the probability loss value is as shown in formula (6):
loss=-log(P(y|X)) (6)
it should be noted that in the training process, the preset emotion model is processed in batches, the probability loss value of the preset emotion model is obtained first, then the gradient is calculated reversely, and the preset emotion model is updated according to the probability loss value to obtain the final emotion transfer model.
In some embodiments, analyzing the emotions of the target user yields the user's daily emotion distribution: the larger an emotion's share of the distribution, the more often the target user is in that emotional state. For example, the higher the share of happy emotion in a person's distribution, the more cheerfully that person chats in daily life. The target user's emotion changes are captured by the self emotion matrix M1, obtained through model training and learning, which represents the user's own emotion-change characteristics: M1_{i,j} represents the user changing from the i-th emotion to the j-th emotion, and the larger its value, the more often the user shifts from emotion i to emotion j while chatting. The target user's influence on others' emotions is captured by the emotion influence matrix M2, likewise obtained through model training and learning: M2_{i,j} represents the likelihood that the other party shows the j-th emotion when the user shows the i-th emotion, and the larger M2_{i,j}, the more often this kind of emotion transfer occurs. The influence of others on the target user is captured by the emotion transfer matrix M3, also obtained through model training and learning: M3_{i,j} is the probability that the user shows the j-th emotion when the other party shows the i-th emotion, and the larger this value, the more likely such an emotion transfer is. The character traits of the target user can therefore be analyzed comprehensively and in multiple dimensions.
An embodiment of the present application further provides an electronic device, where the electronic device includes: a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection and communication between the processor and the memory, the program, when executed by the processor, implementing the above-described emotion analysis method. The electronic device can be any intelligent terminal, including a tablet computer, a vehicle-mounted computer, and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, where the electronic device includes:
the processor 901 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiments of the present application;
the Memory 902 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 902 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 902 and called by the processor 901 to execute the emotion analysis method according to the embodiments of the present application;
an input/output interface 903 for implementing information input and output;
a communication interface 904, configured to implement communication interaction between the device and another device, where communication may be implemented in a wired manner (e.g., USB, network cable, etc.), or in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 enable a communication connection within the device with each other through a bus 905.
The embodiment of the application also provides a storage medium, which is a computer-readable storage medium for computer-readable storage, and the storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the emotion analysis method.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the emotion analysis method and apparatus, electronic device, and storage medium of the embodiments, the historical dialogue information of a target user is first preprocessed to obtain sample dialogue information of the target user carrying emotion labels. The sample dialogue information is then input into a preset emotion model for emotion classification to obtain the number of times different emotion labels appear in the sample dialogue information, an emotion frequency set is generated from these counts, and an emotion transfer matrix of the target user is established from the emotion frequency set to facilitate subsequent analysis of the target user's emotion changes. The preset emotion model is then trained according to the emotion transfer matrix, which improves the robustness of the model and yields the emotion transfer model. Finally, the acquired target dialogue information of the target user is input into the emotion transfer model for probability prediction to obtain the probability value that the emotion of the target user changes under the influence of the emotions of others. Analyzing the target user through the emotion transfer model thus improves the accuracy of emotion analysis and allows the emotion changes of the target user caused by others to be analyzed accurately.
The embodiments described herein are intended to more clearly illustrate the technical solutions of the embodiments of the present application and do not constitute a limitation on them; it is apparent to those skilled in the art that, as technology evolves and new application scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
It will be appreciated by those skilled in the art that the embodiments shown in fig. 1-7 are not limiting of the embodiments of the present application and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps may be included.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes multiple instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing programs, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereto. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.
Claims (10)
1. A method of sentiment analysis, the method comprising:
preprocessing acquired historical dialogue information of a target user to obtain sample dialogue information of the target user, wherein the sample dialogue information carries a plurality of emotion labels;
inputting the sample conversation information into a preset emotion model for emotion classification to obtain an emotion frequency set, wherein the emotion frequency set comprises the frequency of occurrence of each emotion label in the sample conversation information;
establishing an emotion transfer matrix of the target user according to the emotion frequency set, wherein the emotion transfer matrix is used for representing a probability value that the emotion of the target user is influenced by the emotions of other people;
training the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
and inputting the acquired target conversation information of the target user into the emotion transfer model for probability prediction to obtain an emotion fluctuation probability value, wherein the emotion fluctuation probability value is a probability value representing that the emotion of the target user changes under the influence of the emotion of other people.
2. The emotion analysis method according to claim 1, wherein the historical conversation information includes conversation information of a plurality of users having a conversation with the target user; the preprocessing the acquired historical dialogue information of the target user to obtain the sample dialogue information of the target user comprises the following steps:
determining the sample dialogue information corresponding to the target user in the historical dialogue information;
and labeling the sample dialogue information based on a preset labeling model to obtain the emotion label of the sample dialogue information.
3. The emotion analysis method according to claim 1, wherein the preset emotion model includes a sentence vector encoder and an emotion classifier; inputting the sample dialogue information into a preset emotion model for emotion classification to obtain an emotion frequency set, wherein the emotion frequency set comprises:
inputting sentences in the sample dialogue information into the sentence vector encoder to perform sentence segmentation to obtain sentence vectors of the sample dialogue information;
inputting the sentence vector into the emotion classifier so that the emotion classifier performs emotion recognition on the emotion label in the sentence vector to obtain a recognition result;
and carrying out classified statistics on the recognition results to obtain the emotion frequency set.
4. The emotion analysis method according to claim 3, further comprising:
establishing a self emotion matrix according to the emotion frequency set, wherein the self emotion matrix is used for representing the probability value that the emotion of the current conversation of the target user influences the emotion of the next conversation;
and establishing an emotion influence matrix according to the emotion frequency set, wherein the emotion influence matrix is used for representing probability values of emotion influences of others by the target user in a conversation process with the target user.
5. The emotion analysis method of claim 4, wherein the training of the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model comprises:
constraining the emotion labels in the sample conversation information to obtain a real emotion label sequence;
inputting the real emotion label sequence into the emotion classifier to perform probability calculation to obtain label probability distribution;
optimizing a preset score value function according to the emotion transfer matrix, the emotion influence matrix, the real emotion label sequence and the label probability distribution to obtain a prediction sequence probability function;
and training the preset emotion model according to the prediction sequence probability function to obtain the emotion transfer model.
6. The emotion analysis method of claim 5, wherein the preset emotion model includes a conditional random field layer; the step of constraining the emotion labels in the sample dialogue information to obtain a real emotion label sequence comprises the following steps:
inputting the emotion labels in the sample dialogue information into the conditional random field layer for screening to obtain a real emotion label probability value;
and carrying out category statistics on the probability value of the real emotion label to obtain the real emotion label sequence.
7. The emotion analysis method of claim 5, wherein the predicted sequence probability function includes a likelihood probability function and a loss function; the training the preset emotion model according to the prediction sequence probability function to obtain the emotion transfer model comprises the following steps:
obtaining a likelihood probability value of the preset emotion model according to the likelihood probability function;
inputting the likelihood probability value into the loss function for calculation to obtain a probability loss value;
obtaining the emotion transfer probability value of the preset emotion model according to the loss function;
and when the probability loss value is smaller than the emotion transfer probability value, updating the preset emotion model according to the probability loss value to obtain the emotion transfer model.
8. An emotion analyzing apparatus, characterized in that the apparatus comprises:
the conversation processing module is used for preprocessing the acquired historical conversation information of the target user to obtain sample conversation information of the target user, wherein the sample conversation information carries a plurality of emotion labels;
the emotion classification module is used for inputting the sample conversation information into a preset emotion model to perform emotion classification to obtain an emotion frequency set, wherein the emotion frequency set comprises the frequency of occurrence of each emotion label in the sample conversation information;
the matrix establishing module is used for establishing an emotion transfer matrix of the target user according to the emotion frequency set, wherein the emotion transfer matrix is used for representing a probability value that the emotion of the target user is influenced by the emotion of other people;
the model training module is used for training the preset emotion model based on the emotion transfer matrix to obtain an emotion transfer model;
and the probability prediction module is used for inputting the acquired target conversation information of the target user into the emotion transfer model for probability prediction to obtain an emotion fluctuation probability value, wherein the emotion fluctuation probability value is a probability value representing that the emotion of the target user changes under the influence of the emotions of other people.
9. An electronic device, characterized in that the electronic device comprises a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling a connection communication between the processor and the memory, the program, when executed by the processor, implementing the steps of the emotion analyzing method as claimed in any one of claims 1 to 7.
10. A storage medium, which is a computer-readable storage medium for computer-readable storage, characterized in that the storage medium stores one or more programs executable by one or more processors to implement the steps of the emotion analyzing method as recited in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210958510.4A CN115292495A (en) | 2022-08-09 | 2022-08-09 | Emotion analysis method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115292495A true CN115292495A (en) | 2022-11-04 |
Family
ID=83828775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210958510.4A Pending CN115292495A (en) | 2022-08-09 | 2022-08-09 | Emotion analysis method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115292495A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115640323A (en) * | 2022-12-22 | 2023-01-24 | 浙江大学 | A Sentiment Prediction Method Based on Transition Probability |
CN115640323B (en) * | 2022-12-22 | 2023-03-17 | 浙江大学 | A Sentiment Forecasting Method Based on Transition Probability |
CN119783817A (en) * | 2024-12-13 | 2025-04-08 | 内蒙古工业大学 | A dialogue behavior-oriented personalized emotion generation method |
CN119599055A (en) * | 2025-02-10 | 2025-03-11 | 北京通用人工智能研究院 | Intelligent emotion generation method, system and storage medium |
CN120224512A (en) * | 2025-05-28 | 2025-06-27 | 谷东科技有限公司 | A method and AI glasses that automatically change frame color based on mood and weather |
CN120224512B (en) * | 2025-05-28 | 2025-07-29 | 谷东科技有限公司 | Method for automatically changing frame color according to mood and weather and AI glasses |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |