Disclosure of Invention
The main purpose of the present application is to provide a classification method, a device, a terminal and a storage medium, so as to solve the problem in the related art that classification of different vehicle owners is inaccurate even when their text similarity is high.
To achieve the above object, in a first aspect, the present application provides a classification method, including:
acquiring voice texts of different vehicle owners;
calculating emotion scores, text similarity and weights of voice texts of different vehicle owners;
and calculating the emotion similarity among different car owners based on emotion scores, text similarity and weights of voice texts of the different car owners so as to determine the types of the different car owners based on the emotion similarity among the different car owners.
In one possible implementation, the different owners include at least a first owner and a second owner, and the voice text of the different owners includes at least a first voice text of the first owner and a second voice text of the second owner;
wherein calculating the emotion scores, text similarity and weights of the voice texts of the different vehicle owners includes:
preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text;
respectively calculating a first emotion score corresponding to the first word vector and a second emotion score corresponding to the second word vector by adopting a fine-granularity emotion dictionary;
calculating the similarity of the first voice text and the second voice text to obtain text similarity;
and calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score and the text similarity.
In one possible implementation manner, preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text, including:
word segmentation is carried out on the first voice text and the second voice text respectively, so that the segmented first voice text and segmented second voice text are obtained;
and respectively carrying out vectorization processing on the first voice text after word segmentation and the second voice text after word segmentation by adopting a TF-IDF algorithm to obtain a first word vector and a second word vector.
In one possible implementation manner, calculating a first emotion score corresponding to a first word vector and a second emotion score corresponding to a second word vector by using a fine-granularity emotion dictionary includes:
comparing each word vector in the first word vector and the second word vector with a preset word vector in a fine granularity emotion dictionary, and determining an emotion score of each word vector in the first word vector and an emotion score of each word vector in the second word vector;
the first emotion score is calculated based on the emotion score of each of the first word vectors, and the second emotion score is calculated based on the emotion score of each of the second word vectors.
In one possible implementation, calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score, and the text similarity includes:
normalizing the first emotion score, the second emotion score and the text similarity to obtain a normalized first emotion score, a normalized second emotion score and a normalized text similarity;
calculating a first information entropy based on the normalized first emotion score and the normalized text similarity, and calculating a second information entropy based on the normalized second emotion score and the normalized text similarity;
the first weight is calculated based on the first information entropy and the text encoding of the first speech text, and the second weight is calculated based on the second information entropy and the text encoding of the second speech text.
In one possible implementation, the emotional similarity between the different owners includes at least emotional similarity of the first owner and the second owner;
based on emotion scores, text similarity and weights of voice texts of different vehicle owners, the emotion similarity among the different vehicle owners is calculated, and the method comprises the following steps:
and calculating emotion similarity of the first vehicle owner and the second vehicle owner based on the first weight, the second weight, the first emotion score, the second emotion score and the text similarity.
In one possible implementation, the calculation formula of the emotion similarity of the first vehicle owner and the second vehicle owner is:
score = wi * EscoreA * EscoreB + wj * similarity(A, B)
wherein score represents the emotion similarity of the first vehicle owner and the second vehicle owner, wi represents the first weight, wj represents the second weight, EscoreA represents the first emotion score, EscoreB represents the second emotion score, and similarity(A, B) represents the text similarity.
In a second aspect, an embodiment of the present invention provides a classification apparatus, including:
the acquisition module is used for acquiring voice texts of different vehicle owners;
the computing module is used for computing emotion scores, text similarity and weights of voice texts of different vehicle owners;
the classification module is used for calculating emotion similarity among different car owners based on emotion scores, text similarity and weights of voice texts of the different car owners so as to determine types of the different car owners based on the emotion similarity among the different car owners.
In a third aspect, an embodiment of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any of the classification methods described above when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the classification methods described above.
The embodiment of the invention provides a classification method, a classification device, a classification terminal and a storage medium, wherein the classification method comprises the following steps: firstly, voice texts of different vehicle owners are obtained, emotion scores, text similarity and weights of the voice texts of the different vehicle owners are calculated, and then emotion similarity among the different vehicle owners is calculated based on the emotion scores, the text similarity and the weights of the voice texts of the different vehicle owners, so that types of the different vehicle owners are determined based on the emotion similarity among the different vehicle owners. According to the method, the similarity calculation is combined with emotion, so that the emotion similarity of different car owners is calculated, and the classification accuracy of the different car owners is improved.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be understood that in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "plurality" means two or more. "And/or" merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "Comprising A, B and C" or "comprising A, B, C" means that all three of A, B and C are included; "comprising A, B or C" means that one of A, B and C is included; and "comprising A, B and/or C" means that any one, any two, or all three of A, B and C are included.
It should be understood that in the present invention, "B corresponding to A" or "A corresponding to B" means that B is associated with A, and that B can be determined from A. However, determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. A and B match when the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following description will be made by way of specific embodiments with reference to the accompanying drawings.
In one embodiment, as shown in FIG. 1, a classification method is provided, comprising the steps of:
step S101: and acquiring voice texts of different vehicle owners.
The vehicle driven by the vehicle owner is provided with equipment having a voice acquisition function, so the voice texts of different vehicle owners can be obtained directly by collecting their voices. The equipment includes, but is not limited to, a microphone, a mobile phone, a collector, or the like arranged on the vehicle.
Step S102: and calculating emotion scores, text similarity and weights of voice texts of different vehicle owners.
The number of different vehicle owners is not limited, and can be set according to specific situations, such as 2, 3, 10 and the like.
In the case where the different vehicle owners include at least a first vehicle owner and a second vehicle owner, the voice texts of the different vehicle owners include at least a first voice text of the first vehicle owner and a second voice text of the second vehicle owner. To calculate the emotion scores, text similarity and weights of the voice texts, the first voice text and the second voice text are first preprocessed respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text. A fine-grained emotion dictionary is then used to calculate a first emotion score corresponding to the first word vector and a second emotion score corresponding to the second word vector. The similarity of the first voice text and the second voice text is calculated to obtain the text similarity. Finally, based on the first emotion score, the second emotion score and the text similarity, a first weight corresponding to the first vehicle owner and a second weight corresponding to the second vehicle owner are calculated.
Preprocessing the first voice text and the second voice text respectively to obtain the first word vector and the second word vector proceeds as follows: word segmentation is first performed on the first voice text and the second voice text respectively to obtain a segmented first voice text and a segmented second voice text; the segmented texts are then vectorized using the TF-IDF (term frequency-inverse document frequency) algorithm to obtain the first word vector and the second word vector.
For example, in the case of two vehicle owners, the first voice text of the first vehicle owner is "this navigation is too good", and the second voice text of the second vehicle owner is "this navigation is too bad". Word segmentation is performed on both texts, so that the segmented first voice text is "this / navigation / is / too / good" and the segmented second voice text is "this / navigation / is / too / bad". The segmented first voice text and the segmented second voice text are then vectorized to obtain the first word vector and the second word vector; the manner of vectorization is not particularly limited.
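The word segmentation and TF-IDF vectorization step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the tokenized English sentences, the IDF smoothing, and the sparse dictionary representation of the word vectors are all assumptions made for readability.

```python
import math

def tfidf_vectors(docs):
    """Compute TF-IDF vectors for a list of pre-segmented documents.

    Each document is a list of word tokens; returns one dict per
    document mapping word -> TF-IDF weight (a sparse word vector).
    """
    n_docs = len(docs)
    # Document frequency: in how many documents each word appears.
    df = {}
    for doc in docs:
        for word in set(doc):
            df[word] = df.get(word, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for word in set(doc):
            tf = doc.count(word) / len(doc)          # term frequency
            idf = math.log(n_docs / df[word]) + 1.0  # smoothed inverse document frequency
            vec[word] = tf * idf
        vectors.append(vec)
    return vectors

# Segmented voice texts (word segmentation assumed already done).
first_text = ["this", "navigation", "is", "too", "good"]
second_text = ["this", "navigation", "is", "too", "bad"]
first_vec, second_vec = tfidf_vectors([first_text, second_text])
```

Words shared by both texts receive a lower IDF than the distinguishing words "good" and "bad", so the distinguishing words dominate each vector.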
After the first word vector and the second word vector are obtained, a fine-grained emotion dictionary is used to calculate the first emotion score corresponding to the first word vector and the second emotion score corresponding to the second word vector. Specifically, each word vector in the first word vector and in the second word vector is compared with the preset word vectors in the fine-grained emotion dictionary to determine the emotion score of each word vector in the first word vector and the emotion score of each word vector in the second word vector. The first emotion score is then calculated based on the emotion scores of the word vectors in the first word vector, and the second emotion score is calculated based on the emotion scores of the word vectors in the second word vector.
The fine-grained emotion dictionary contains preset word vectors, each associated with a score corresponding to an emotion category. For example, all emotions may be classified into six categories, namely positive, negative, neutral, liking, aversion and anger, each with its own score; that is, the scores corresponding to positive, negative, neutral, liking, aversion and anger are 1, -1, 0, 2, -2 and -3, respectively.
Each word vector in the first word vector and in the second word vector is compared and matched against the scores corresponding to the emotion categories, so that the emotion score of each word vector in the first word vector and the emotion score of each word vector in the second word vector are obtained.
The emotion scores of the word vectors in the first word vector are then summed to obtain the first emotion score, and the emotion scores of the word vectors in the second word vector are summed to obtain the second emotion score.
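The dictionary lookup and summation described above can be sketched as follows. The dictionary entries are purely illustrative (the category scores follow the six-category example above); a real fine-grained emotion dictionary would be far larger.

```python
# Hypothetical fine-grained emotion dictionary: each word maps to the
# score of its emotion category, per the six-category example above
# (positive=1, negative=-1, neutral=0, liking=2, aversion=-2, anger=-3).
EMOTION_DICT = {
    "good": 1,       # positive
    "love": 2,       # liking
    "bad": -1,       # negative
    "hate": -2,      # aversion
    "furious": -3,   # anger
}

def emotion_score(words):
    """Sum per-word emotion scores; words not in the dictionary count as neutral (0)."""
    return sum(EMOTION_DICT.get(word, 0) for word in words)

first_score = emotion_score(["this", "navigation", "is", "too", "good"])
second_score = emotion_score(["this", "navigation", "is", "too", "bad"])
```

With these illustrative entries, the first text sums to 1 (positive) and the second to -1 (negative), even though the two texts differ by only one word.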
Meanwhile, the text similarity similarity(A, B) between the first voice text and the second voice text needs to be calculated, where A represents the first voice text and B represents the second voice text.
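The text of this application does not reproduce the similarity formula itself. A common choice for comparing TF-IDF word vectors is cosine similarity, sketched below under that assumption.

```python
import math

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two sparse word -> weight vectors."""
    dot = sum(weight * vec_b.get(word, 0.0) for word, weight in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # treat an empty text as dissimilar to everything
    return dot / (norm_a * norm_b)

# Toy word vectors for the "too good" / "too bad" example, keeping
# only the two most distinguishing words of each text:
sim = cosine_similarity({"navigation": 1.0, "good": 1.0},
                        {"navigation": 1.0, "bad": 1.0})
```

Here the two toy vectors share one of two equally weighted words, giving a similarity of 0.5: lexically close, but not identical.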
After the first emotion score, the second emotion score and the text similarity are obtained, the first weight corresponding to the first vehicle owner and the second weight corresponding to the second vehicle owner are calculated based on them. Specifically, the first emotion score, the second emotion score and the text similarity are first normalized to obtain a normalized first emotion score, a normalized second emotion score and a normalized text similarity. A first information entropy is then calculated based on the normalized first emotion score and the normalized text similarity, and a second information entropy is calculated based on the normalized second emotion score and the normalized text similarity. Finally, the first weight is calculated based on the first information entropy and the text code of the first voice text, and the second weight is calculated based on the second information entropy and the text code of the second voice text.
The normalization includes, but is not limited to, methods such as range (min-max) normalization.
After normalization is performed on the first emotion score EscoreA, the second emotion score EscoreB and the text similarity similarity(A, B), the entropy weight method can be used to calculate a first information entropy Ei from the normalized first emotion score and the normalized text similarity, and a second information entropy Ej from the normalized second emotion score and the normalized text similarity.
The first weight wi can then be calculated based on the first information entropy Ei and the text code i of the first voice text, where i = 1, 2, ..., n.
Similarly, the second weight wj is calculated based on the second information entropy Ej and the text code j of the second voice text, where j = 1, 2, ..., n.
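The weight formulas themselves are not reproduced in this text. The standard entropy weight method (min-max normalization, Shannon entropy of the normalized indicators, and weights proportional to 1 - E) is one plausible reading, sketched here as an assumption rather than as the patented computation.

```python
import math

def min_max_normalize(values):
    """Range (min-max) normalization onto [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def information_entropy(values):
    """Normalized Shannon entropy of non-negative values (0 * log(0) := 0)."""
    total = sum(values)
    entropy = 0.0
    for v in values:
        p = v / total if total else 0.0
        if p > 0.0:
            entropy -= p * math.log(p)
    return entropy / math.log(len(values))  # scaled into [0, 1]

def entropy_weights(entropies):
    """Standard entropy weight method: w_i = (1 - E_i) / sum_k (1 - E_k)."""
    gains = [1.0 - e for e in entropies]
    total = sum(gains)
    return [g / total if total else 1.0 / len(gains) for g in gains]

# The lower-entropy (more informative) indicator receives the larger weight.
weights = entropy_weights([0.2, 0.8])
```

With entropies 0.2 and 0.8, the information gains are 0.8 and 0.2, so the weights come out as 0.8 and 0.2 and sum to 1.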
Step S103: and calculating the emotion similarity among different car owners based on emotion scores, text similarity and weights of voice texts of the different car owners so as to determine the types of the different car owners based on the emotion similarity among the different car owners.
In the case where the emotion similarity between the different vehicle owners includes at least the emotion similarity of the first vehicle owner and the second vehicle owner, calculating the emotion similarity between the different vehicle owners based on the emotion scores, text similarity and weights of their voice texts means calculating the emotion similarity of the first vehicle owner and the second vehicle owner based on the first weight, the second weight, the first emotion score, the second emotion score and the text similarity.
Specifically, the calculation formula of the emotion similarity of the first vehicle owner and the second vehicle owner is as follows:
score = wi * EscoreA * EscoreB + wj * similarity(A, B)
wherein score represents the emotion similarity of the first vehicle owner and the second vehicle owner, wi represents the first weight, wj represents the second weight, EscoreA represents the first emotion score, EscoreB represents the second emotion score, and similarity(A, B) represents the text similarity.
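The score formula above is a single weighted combination and can be transcribed directly; the input values below are placeholders for illustration only.

```python
def emotion_similarity(w_i, w_j, escore_a, escore_b, text_sim):
    """score = wi * EscoreA * EscoreB + wj * similarity(A, B)."""
    return w_i * escore_a * escore_b + w_j * text_sim

# Placeholder inputs: weights from the entropy step, emotion scores
# from the dictionary step, and a cosine-style text similarity.
score = emotion_similarity(0.6, 0.4, 1.0, -1.0, 0.5)  # 0.6*1*(-1) + 0.4*0.5 = -0.4
```

Opposite-signed emotion scores drive the first term negative, so two owners whose texts are lexically close but emotionally opposed (as in the navigation example) receive a low emotion similarity, which is exactly the behavior the method aims for.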
The embodiment of the invention provides a classification method, which comprises the following steps: firstly, voice texts of different vehicle owners are obtained, emotion scores, text similarity and weights of the voice texts of the different vehicle owners are calculated, and then emotion similarity among the different vehicle owners is calculated based on the emotion scores, the text similarity and the weights of the voice texts of the different vehicle owners, so that types of the different vehicle owners are determined based on the emotion similarity among the different vehicle owners. According to the method, the similarity calculation is combined with emotion, so that the emotion similarity of different car owners is calculated, and the classification accuracy of the different car owners is improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention in any way.
The following are device embodiments of the invention, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 2 shows a schematic structural diagram of a classification device according to an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, and the classification device includes an obtaining module 201, a calculating module 202, and a classification module 203, which are specifically as follows:
an acquisition module 201, configured to acquire voice texts of different vehicle owners;
the calculating module 202 is used for calculating emotion scores, text similarity and weights of voice texts of different vehicle owners;
the classification module 203 is configured to calculate emotion similarity between different vehicle owners based on emotion scores, text similarity and weights of voice texts of the different vehicle owners, so as to determine types of the different vehicle owners based on the emotion similarity between the different vehicle owners.
In one possible implementation, the different owners include at least a first owner and a second owner, and the voice text of the different owners includes at least a first voice text of the first owner and a second voice text of the second owner;
the computing module 202 is further configured to pre-process the first voice text and the second voice text, respectively, to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text;
respectively calculating a first emotion score corresponding to the first word vector and a second emotion score corresponding to the second word vector by adopting a fine-granularity emotion dictionary;
calculating the similarity of the first voice text and the second voice text to obtain text similarity;
and calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score and the text similarity.
In one possible implementation manner, the computing module 202 is further configured to perform word segmentation processing on the first voice text and the second voice text, so as to obtain a segmented first voice text and a segmented second voice text;
and respectively carrying out vectorization processing on the first voice text after word segmentation and the second voice text after word segmentation by adopting a TF-IDF algorithm to obtain a first word vector and a second word vector.
In a possible implementation manner, the computing module 202 is further configured to compare each of the first word vector and the second word vector with a word vector preset in the fine granularity emotion dictionary, and determine an emotion score of each of the first word vector and an emotion score of each of the second word vector;
the first emotion score is calculated based on the emotion score of each of the first word vectors, and the second emotion score is calculated based on the emotion score of each of the second word vectors.
In one possible implementation manner, the computing module 202 is further configured to normalize the first emotion score, the second emotion score, and the text similarity to obtain a normalized first emotion score, a normalized second emotion score, and a normalized text similarity;
calculating a first information entropy based on the normalized first emotion score and the normalized text similarity, and calculating a second information entropy based on the normalized second emotion score and the normalized text similarity;
the first weight is calculated based on the first information entropy and the text encoding of the first speech text, and the second weight is calculated based on the second information entropy and the text encoding of the second speech text.
In one possible implementation, the emotional similarity between the different owners includes at least emotional similarity of the first owner and the second owner;
the classification module 203 is further configured to calculate emotional similarity of the first vehicle owner and the second vehicle owner based on the first weight, the second weight, the first emotion score, the second emotion score, and the text similarity.
In one possible implementation, the calculation formula of the emotion similarity of the first vehicle owner and the second vehicle owner is:
score = wi * EscoreA * EscoreB + wj * similarity(A, B)
wherein score represents the emotion similarity of the first vehicle owner and the second vehicle owner, wi represents the first weight, wj represents the second weight, EscoreA represents the first emotion score, EscoreB represents the second emotion score, and similarity(A, B) represents the text similarity.
The embodiment of the invention provides a classification device which can be particularly used for acquiring voice texts of different vehicle owners, calculating emotion scores, text similarity and weights of the voice texts of the different vehicle owners, and calculating emotion similarity among the different vehicle owners based on the emotion scores, the text similarity and the weights of the voice texts of the different vehicle owners so as to determine types of the different vehicle owners based on the emotion similarity among the different vehicle owners. According to the method, the similarity calculation is combined with emotion, so that the emotion similarity of different car owners is calculated, and the classification accuracy of the different car owners is improved.
Fig. 3 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 3, the terminal 3 of this embodiment includes: a processor 301, a memory 302 and a computer program 303 stored in the memory 302 and executable on the processor 301. The steps of the classification method embodiments described above, such as steps S101-S103 shown in Fig. 1, are implemented when the processor 301 executes the computer program 303. Alternatively, when executing the computer program 303, the processor 301 performs the functions of the modules/units of the classification apparatus embodiments described above, such as the functions of the modules 201-203 shown in Fig. 2.
The present invention also provides a readable storage medium having a computer program stored therein, which when executed by a processor is configured to implement a classification method provided in the above various embodiments, including:
acquiring voice texts of different vehicle owners;
calculating emotion scores, text similarity and weights of voice texts of different vehicle owners;
and calculating the emotion similarity among different car owners based on emotion scores, text similarity and weights of voice texts of the different car owners so as to determine the types of the different car owners based on the emotion similarity among the different car owners.
In one possible implementation, the different owners include at least a first owner and a second owner, and the voice text of the different owners includes at least a first voice text of the first owner and a second voice text of the second owner;
wherein calculating the emotion scores, text similarity and weights of the voice texts of the different vehicle owners includes:
preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text;
respectively calculating a first emotion score corresponding to the first word vector and a second emotion score corresponding to the second word vector by adopting a fine-granularity emotion dictionary;
calculating the similarity of the first voice text and the second voice text to obtain text similarity;
and calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score and the text similarity.
In one possible implementation manner, preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text, including:
word segmentation is carried out on the first voice text and the second voice text respectively, so that the segmented first voice text and segmented second voice text are obtained;
and respectively carrying out vectorization processing on the first voice text after word segmentation and the second voice text after word segmentation by adopting a TF-IDF algorithm to obtain a first word vector and a second word vector.
In one possible implementation manner, calculating a first emotion score corresponding to a first word vector and a second emotion score corresponding to a second word vector by using a fine-granularity emotion dictionary includes:
comparing each word vector in the first word vector and the second word vector with a preset word vector in a fine granularity emotion dictionary, and determining an emotion score of each word vector in the first word vector and an emotion score of each word vector in the second word vector;
the first emotion score is calculated based on the emotion score of each of the first word vectors, and the second emotion score is calculated based on the emotion score of each of the second word vectors.
In one possible implementation, calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score, and the text similarity includes:
normalizing the first emotion score, the second emotion score and the text similarity to obtain a normalized first emotion score, a normalized second emotion score and a normalized text similarity;
calculating a first information entropy based on the normalized first emotion score and the normalized text similarity, and calculating a second information entropy based on the normalized second emotion score and the normalized text similarity;
the first weight is calculated based on the first information entropy and the text encoding of the first speech text, and the second weight is calculated based on the second information entropy and the text encoding of the second speech text.
In one possible implementation, the emotional similarity between the different owners includes at least emotional similarity of the first owner and the second owner;
calculating the emotion similarity among the different vehicle owners based on the emotion scores, text similarity and weights of the voice texts of the different vehicle owners includes:
calculating the emotion similarity of the first vehicle owner and the second vehicle owner based on the first weight, the second weight, the first emotion score, the second emotion score and the text similarity.
In one possible implementation, the calculation formula of the emotion similarity of the first vehicle owner and the second vehicle owner is:
score=wi*EscoreA*EscoreB+wj*similarity(A,B)
wherein score represents the emotion similarity of the first vehicle owner and the second vehicle owner, wi represents the first weight, wj represents the second weight, EscoreA represents the first emotion score, EscoreB represents the second emotion score, and similarity(A, B) represents the text similarity.
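With illustrative values substituted into the formula, the computation looks like this. All numbers below are made up for demonstration and do not come from the text.

```python
# score = wi * EscoreA * EscoreB + wj * similarity(A, B)
wi, wj = 0.4, 0.6              # first and second weights (illustrative)
escore_a, escore_b = 0.7, 0.5  # first and second emotion scores (illustrative)
sim_ab = 0.8                   # similarity(A, B) (illustrative)

score = wi * escore_a * escore_b + wj * sim_ab
# 0.4 * 0.7 * 0.5 + 0.6 * 0.8 = 0.14 + 0.48 = 0.62
```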
The readable storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Alternatively, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC). In addition, the ASIC may reside in a user device. Alternatively, the processor and the readable storage medium may reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present invention also provides a program product including execution instructions stored in a readable storage medium. At least one processor of a device may read the execution instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the device to implement the classification method provided by the various embodiments described above, including:
acquiring voice texts of different vehicle owners;
calculating emotion scores, text similarity and weights of voice texts of different vehicle owners;
and calculating the emotion similarity among the different vehicle owners based on the emotion scores, text similarity and weights of the voice texts of the different vehicle owners, so as to determine the types of the different vehicle owners based on the emotion similarity among the different vehicle owners.
In one possible implementation, the different owners include at least a first owner and a second owner, and the voice text of the different owners includes at least a first voice text of the first owner and a second voice text of the second owner;
calculating the emotion scores, text similarity and weights of the voice texts of the different vehicle owners includes:
preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text;
respectively calculating a first emotion score corresponding to the first word vector and a second emotion score corresponding to the second word vector by adopting a fine-granularity emotion dictionary;
calculating the similarity of the first voice text and the second voice text to obtain text similarity;
and calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score and the text similarity.
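The text-similarity step in the list above can be sketched as cosine similarity between the two word-vector representations. Cosine similarity is an assumed choice for illustration; the text does not name a specific similarity measure.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    # a and b map words to weights (e.g. TF-IDF values);
    # returns their cosine similarity in [0, 1] for non-negative weights.
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical word vectors for the two voice texts.
sim = cosine_similarity({"car": 0.5, "slow": 0.3}, {"car": 0.5, "fast": 0.4})
```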
In one possible implementation, preprocessing the first voice text and the second voice text respectively to obtain a first word vector corresponding to the first voice text and a second word vector corresponding to the second voice text includes:
performing word segmentation on the first voice text and the second voice text respectively to obtain a segmented first voice text and a segmented second voice text;
and performing vectorization on the segmented first voice text and the segmented second voice text respectively by using a TF-IDF algorithm to obtain the first word vector and the second word vector.
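A toy two-document version of the TF-IDF vectorization step is sketched below. The smoothed IDF form (log((1 + n) / (1 + df)) + 1, in the style of scikit-learn's TfidfVectorizer) is an assumption; the text only names "a TF-IDF algorithm" without fixing a variant, and in practice a library implementation would normally be used.

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of segmented texts, each a list of words.
    # Returns one {word: weight} vector per document.
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            w: (c / len(doc)) * (math.log((1 + n) / (1 + df[w])) + 1)
            for w, c in tf.items()
        })
    return vectors

# Hypothetical segmented voice texts of the two owners.
vec_a, vec_b = tf_idf([["good", "car"], ["bad", "car"]])
```

Note how the word shared by both texts ("car") receives a lower IDF, and hence a lower weight, than the words unique to one text.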
In one possible implementation, calculating the first emotion score corresponding to the first word vector and the second emotion score corresponding to the second word vector by using the fine-granularity emotion dictionary includes:
comparing each word vector in the first word vector and each word vector in the second word vector with the preset word vectors in the fine-granularity emotion dictionary, and determining an emotion score of each word vector in the first word vector and an emotion score of each word vector in the second word vector;
and calculating the first emotion score based on the emotion score of each word vector in the first word vector, and calculating the second emotion score based on the emotion score of each word vector in the second word vector.
In one possible implementation, calculating a first weight corresponding to the first owner and a second weight corresponding to the second owner based on the first emotion score, the second emotion score, and the text similarity includes:
normalizing the first emotion score, the second emotion score and the text similarity to obtain a normalized first emotion score, a normalized second emotion score and a normalized text similarity;
calculating a first information entropy based on the normalized first emotion score and the normalized text similarity, and calculating a second information entropy based on the normalized second emotion score and the normalized text similarity;
the first weight is calculated based on the first information entropy and the text encoding of the first speech text, and the second weight is calculated based on the second information entropy and the text encoding of the second speech text.
In one possible implementation, the emotion similarity among the different vehicle owners includes at least the emotion similarity of the first vehicle owner and the second vehicle owner;
calculating the emotion similarity among the different vehicle owners based on the emotion scores, text similarity and weights of the voice texts of the different vehicle owners includes:
calculating the emotion similarity of the first vehicle owner and the second vehicle owner based on the first weight, the second weight, the first emotion score, the second emotion score and the text similarity.
In one possible implementation, the calculation formula of the emotion similarity of the first vehicle owner and the second vehicle owner is:
score=wi*EscoreA*EscoreB+wj*similarity(A,B)
wherein score represents the emotion similarity of the first vehicle owner and the second vehicle owner, wi represents the first weight, wj represents the second weight, EscoreA represents the first emotion score, EscoreB represents the second emotion score, and similarity(A, B) represents the text similarity.
In the above-described embodiments of the apparatus, it should be understood that the processor may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied as being executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included within the scope of the present invention.