Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart of a complaint event classifying method based on ensemble learning according to an embodiment of the present invention, and the method includes steps S101 to S105:
S101, preprocessing historical property complaint events to obtain an original data set containing a plurality of historical property complaint events;
The property complaint event records in this embodiment are the latest 100,000 historical property complaint event records exported from the complaint processing system. The historical property complaint events are therefore preprocessed to prepare the data for the subsequent data analysis and classification tasks.
Specifically, as shown in fig. 2, the step S101 includes S201 to S203:
S201, acquiring a plurality of historical property complaint event records, and cleaning repeated records, erroneous records and empty records from the plurality of historical property complaint event records;
S202, extracting keywords from the plurality of historical property complaint event records by using regular expressions and the TF-IDF method, and screening a plurality of high-frequency records according to the keywords;
A regular expression is a tool for describing, matching and operating on character strings, and can be used for searching, replacing, extracting and matching strings. It is applied in many fields, such as text processing, programming languages, data verification and web crawlers. Regular expressions provide a flexible and powerful way to pattern-match and process text data.
TF-IDF (Term Frequency-Inverse Document Frequency) is a common weighting technique in information retrieval and data mining, where TF is the term frequency and IDF is the inverse document frequency.
S203, filtering a plurality of historical property complaint events that meet the text requirement from the plurality of high-frequency records, and performing data cleaning to obtain the original data set.
Specifically, this embodiment includes 100,000 historical property complaint event records. These records are cleaned to remove repeated records, erroneous records and empty records, yielding a set of cleaned historical property complaint event records. Keywords are then extracted from the cleaned records through the combined use of regular expressions and the TF-IDF method, and the high-frequency records related to these keywords are identified. A plurality of historical property complaint events that meet the text requirement are filtered from the high-frequency records and the data is cleaned. To remove junk data, the distribution of the text lengths of the historical property complaint events is first derived, and event records with a text length of less than 20 are deleted; to ensure that the text token length (the word groups read from the text that satisfy the lexical rules) is smaller than the maximum input length of the BERT model, event records with a text length greater than 600 are also deleted. After the above steps, 30,000 historical complaint event records remain and are used as the original data set in this embodiment.
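The preprocessing of steps S201 to S203 can be sketched as follows. This is a minimal, illustrative sketch assuming each record is a dict with a hypothetical "text" field and whitespace-separated tokens; the length thresholds (20 and 600 characters) follow the description above, while the keyword count is an arbitrary illustrative choice.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

def preprocess(records, top_k=50):
    # S201: drop empty and duplicate records (erroneous records would need domain-specific rules).
    texts = [r["text"].strip() for r in records if r.get("text") and r["text"].strip()]
    texts = list(dict.fromkeys(texts))  # de-duplicate while keeping order

    # S202: extract keywords with a regular expression plus TF-IDF weighting.
    cleaned = [re.sub(r"[0-9a-zA-Z_]+", " ", t) for t in texts]  # strip ids/numbers
    vectorizer = TfidfVectorizer(max_features=top_k)
    vectorizer.fit(cleaned)
    keywords = set(vectorizer.get_feature_names_out())
    high_freq = [t for t, c in zip(texts, cleaned) if any(k in c for k in keywords)]

    # S203: keep records whose length suits the BERT input limit (20 <= len <= 600).
    return [t for t in high_freq if 20 <= len(t) <= 600]
```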
S102, performing abstract extraction on the original data set by using a GPT-3 model to obtain a key data set;
In this embodiment, before the summaries of the original data set are extracted, the original data set is split so that the training set, the verification set and the test set all contain a considerable number of event classifications. Specifically, the 30,000 historical complaint event records obtained in step S101 are divided into a training set, a verification set and a test set at a ratio of 6:3:1, so that the training set contains 18,000 records, the verification set contains 9,000 records and the test set contains 3,000 records.
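A minimal sketch of the 6:3:1 split, assuming `records` is the cleaned list obtained in step S101; a stratified split by event category could be substituted to better guarantee that every category appears in each subset.

```python
import random

def split_dataset(records, seed=42):
    # Shuffle, then split 6:3:1 into training, verification and test sets.
    records = records[:]
    random.Random(seed).shuffle(records)
    n = len(records)
    n_train, n_val = int(0.6 * n), int(0.3 * n)
    return records[:n_train], records[n_train:n_train + n_val], records[n_train + n_val:]
```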
Furthermore, the GPT-3 model is used to summarize the historical complaint texts in the training set, the verification set and the test set. GPT-3 is a powerful natural language processing model capable of understanding and generating natural language text; summarization condenses the complaint texts and focuses on their core content, which improves the text quality and the classification performance of the subsequent fusion model. Specifically, a suitable prompt (an input instruction that guides the model to perform a specific generation task) is constructed and input into the GPT-3 model for summary extraction. The prompt is as follows:
"Please generate a concise and clear summary for the following given property complaint event text. Note: do not add any fabricated content to the answer, and answer in Chinese. Complaint event text: {some complaint event text}."
The data in the training set and the verification set are placed in turn into {some complaint event text} to obtain the text summaries corresponding to the training set and the verification set, which form the key data set.
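The prompt construction and summary extraction can be sketched as below; `generate` is a hypothetical stand-in for whatever GPT-3 text-generation interface is available, and the English prompt text mirrors the prompt above.

```python
PROMPT_TEMPLATE = (
    "Please generate a concise and clear summary for the following property complaint "
    "event text. Note: do not add any fabricated content, and answer in Chinese. "
    "Complaint event text: {text}"
)

def summarize(records, generate):
    # `generate` is a hypothetical callable wrapping the GPT-3 text-generation interface;
    # it takes a prompt string and returns the generated summary string.
    return [generate(PROMPT_TEMPLATE.format(text=t)) for t in records]
```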
S103, carrying out data enhancement on the key data set by adopting an AEDA technology to obtain an enhanced data set;
In this embodiment, to improve the classification performance of the subsequent ensemble learning model, the AEDA technique (An Easier Data Augmentation) is employed for data enhancement. AEDA is a data augmentation technique used in machine learning and deep learning tasks; by automatically augmenting the original data it increases the diversity and quantity of the training data and improves the generalization ability of the model. The augmentation is realized by inserting several random punctuation marks at random positions in the key data set. This keeps the relative order of the characters of the original text unchanged while changing their absolute positions, which yields better generalization performance. The candidate punctuation marks for insertion are: {".", ";", "?", ":", "!", ","}. In practice, AEDA augmentation is applied to the summaries of the training set: four punctuation marks are inserted into each piece of training data, with random insertion positions and random symbol types. Example results are shown below:
Original text:
Cars have long been illegally parked at the elevator entrance of the basement level 2 garage, which raises safety concerns and causes inconvenience; some cars are even parked for a week or a month without anyone moving them. This has been fed back to the manager several times without being resolved, which is very unsatisfactory. Please follow up as soon as possible.
Enhanced text example 1:
Cars have long been; illegally parked at the elevator entrance of the basement level 2 garage, which raises safety concerns! and causes inconvenience; some cars are even parked for a week or a month without anyone, moving them. This has been fed back to the manager several times without being resolved, which is very unsatisfactory. Please follow up: as soon as possible.
Enhanced text example 2:
Cars have long been illegally parked at the elevator? entrance of the basement level 2 garage, which raises. safety concerns and causes inconvenience; some cars are even parked for a week or a month without anyone moving them. This has been fed back to the manager several, times without being resolved, which is very unsatisfactory! Please follow up as soon as possible.
Enhanced text example 3:
? Cars have long been illegally parked at the elevator entrance of the basement level 2 garage, which raises safety concerns and causes inconvenience; some cars are even: parked for a week or a month without anyone moving them. This has been fed back; to the manager several times without being resolved, which is very unsatisfactory. Please follow up as soon as possible,
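A minimal sketch of the AEDA augmentation described above, inserting four random punctuation marks from the candidate set into each summary; the function names are illustrative.

```python
import random

PUNCTUATION = [".", ";", "?", ":", "!", ","]

def aeda(text, n_insert=4, seed=None):
    # Insert n_insert random punctuation marks at random positions; the relative order of the
    # original characters is preserved while their absolute positions change.
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_insert):
        pos = rng.randint(0, len(chars))
        chars.insert(pos, rng.choice(PUNCTUATION))
    return "".join(chars)
```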
S104, training on the enhanced data set by using a Blending fusion algorithm so as to generate an ensemble learning model; the Blending fusion algorithm comprises two stages: the first stage consists of two base models, namely a fusion model of BiLSTM and Dilated CNN and an Attention model, and the second stage is a BERT model;
The Blending fusion algorithm is an ensemble learning method used to combine the prediction results of a plurality of base models to obtain more accurate and stable predictions.
In the Blending fusion algorithm, the data set is typically split into two non-overlapping portions: a training set and a verification set. The training set is used to train a plurality of different base models, and the verification set is used to evaluate their performance. Once the first-stage models are trained and evaluated on the verification set, their predictions on the verification set are used as new training data: these predictions serve as features for training a second-stage model, also known as a meta model, which takes the first-stage predictions as input and the actual labels of the verification set as targets.
The advantage of this fusion method is that the strengths of different models can be exploited in prediction, thereby improving the overall accuracy and robustness.
In one embodiment, as shown in fig. 3, the step S104 includes steps S301 to S305:
S301, inputting samples in the enhanced data set to a Word2vec representation layer for vector representation to obtain a text vector;
Specifically, as shown in fig. 4, the applied model is a fusion model of BiLSTM and Dilated CNN, and the input layer is the text representation layer of the complaint event summary, namely the Word2vec representation layer, which converts the text into a vector representation. As can be seen from step S203, the maximum text length is 600, so each text length is n=600. Let x_i be the word vector of the i-th word; the vector of the input text is then expressed as X = [x_1, x_2, …, x_n].
In this embodiment, biLSTM model is used to extract semantic relationships of complaint events, biLSTM model (two-way long-short-term memory network) is a variant of Recurrent Neural Network (RNN) that can better capture long-term dependencies when processing sequence data. The DILATED CNN model is used for extracting the dependency relationship between phrases of complaint events, the DILATED CNN model (Dilated convolutional neural network) is a variant of the convolutional neural network, and the receptive field is expanded by introducing hole convolution (dilated convolution), so that the global feature of input data is effectively captured.
S302, inputting the text vector into a forward LSTM unit and a reverse LSTM unit in BiLSTM to respectively obtain forward output and reverse output from two opposite directions, and splicing the forward output and the reverse output to obtain a first output;
In this embodiment, the LSTM cell is composed of a forget gate, an input gate, an output gate and a memory cell. At time t, given the input x_t, the hidden state h_t is calculated as follows:

f_t = σ(W_f[x_t, h_{t-1}] + b_f)    (1)

i_t = σ(W_i[x_t, h_{t-1}] + b_i)    (2)

C̃_t = tanh(W_c[x_t, h_{t-1}] + b_c)    (3)

C_t = f_t * C_{t-1} + i_t * C̃_t    (4)

O_t = σ(W_o[x_t, h_{t-1}] + b_o)    (5)

h_t = O_t * tanh(C_t)    (6)

where f_t, i_t and O_t are the states of the forget gate, the input gate and the output gate at time t, W_f, W_i, W_c and W_o are the weight matrices of the respective components, b_f, b_i, b_c and b_o are the corresponding bias vectors, C̃_t is the candidate state value of the memory cell at time t computed by the tanh function, C_t is the memory cell state at time t, and σ is the sigmoid function. For the BiLSTM model, the hidden-layer outputs of the forward LSTM unit and the backward LSTM unit are computed by formulas (1) to (6), respectively, and are then spliced together, where h_t→ is the forward output obtained by the forward LSTM unit in BiLSTM and h_t← is the backward output obtained by the backward LSTM unit, giving the BiLSTM hidden-layer output at time t:

h_t = [h_t→, h_t←]

Therefore, the output of the BiLSTM model is h = [h_1, h_2, …, h_n], i.e., the first output.
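A minimal PyTorch sketch of the BiLSTM encoder of step S302, assuming Word2vec embeddings of dimension 300; the hidden size is an illustrative choice, and `nn.LSTM` with `bidirectional=True` already returns the spliced forward and backward hidden states h_t = [h_t→, h_t←].

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    # Bidirectional LSTM over Word2vec vectors; the hidden size is an illustrative choice.
    def __init__(self, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, x):                 # x: (batch, n, embed_dim), n = 600
        h, _ = self.bilstm(x)             # h: (batch, n, 2 * hidden_dim)
        return h                          # forward and backward states already spliced
```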
S303, convolving the first output by using one-dimensional dilated convolutions of different sizes to obtain a plurality of second outputs:
In this embodiment, the first output is input into the Dilated CNN model and convolved with one-dimensional dilated convolutions of different sizes. Specifically, since an image has two dimensions (width and height) while text has only one dimension, one-dimensional dilated convolution is used here. The one-dimensional dilated convolution combines the characteristics of one-dimensional convolution and dilated convolution: the width of the convolution kernel of a one-dimensional convolution is equal to the width of the input matrix, so the kernel can only move up and down and cannot move left and right; dilated convolution injects holes into an ordinary convolution kernel, thereby enlarging the receptive field. The dilation rate of the dilated convolution is set to (r, 1), where r is the dilation rate in the transverse direction and 1 is the dilation rate in the column direction; in this example only the transverse dilation rate varies and the column dilation rate is always set to 1. Because the text summaries of complaint events are long, dilated convolutions of three sizes are used to process the BiLSTM output in order to capture long-range dependencies of the text, i.e., the transverse dilation rates are set to 1, 2 and 3 respectively. The second outputs are calculated as follows:

C_i = Conv_{r_i}(W_i, h),  i = 1, 2, 3

where Conv_{r_i} denotes the one-dimensional dilated convolution with dilation rate r_i, W_i is the convolution kernel and h is the first output.
S304, respectively pooling and splicing the plurality of second outputs to obtain a third output;
In this embodiment, the multi-scale dilated convolution layer uses dilated convolutions with different dilation rates to obtain information at different scales and fuses this information. Fusing multi-scale information allows the different distance relationships in the text to be fully extracted, providing rich features for text classification. Max pooling is applied to each convolution result to obtain the most salient text features, which contribute the most to text classification.
The second outputs of the multi-scale dilated convolution layer are pooled respectively, and the pooled results are spliced to obtain the third output, calculated as follows:

S_i = P(C_i),  i = 1, 2, 3

S = [S_1, S_2, S_3]
where P is the max pooling operation.
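A minimal PyTorch sketch of steps S303 and S304, assuming the BiLSTM output has dimension 256 (2 × 128); the kernel size and channel count are illustrative choices, while the dilation rates 1, 2 and 3 follow the description above.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedCNN(nn.Module):
    # Three one-dimensional dilated convolutions with dilation rates 1, 2 and 3 over the
    # BiLSTM output, each followed by max pooling and then spliced.
    def __init__(self, in_dim=256, out_channels=128, kernel_size=3):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(in_dim, out_channels, kernel_size,
                      dilation=r, padding=r * (kernel_size - 1) // 2)
            for r in (1, 2, 3)
        ])

    def forward(self, h):                               # h: (batch, n, in_dim), the first output
        h = h.transpose(1, 2)                           # Conv1d expects (batch, channels, length)
        outs = [conv(h) for conv in self.convs]         # second outputs C_1, C_2, C_3
        pooled = [o.max(dim=2).values for o in outs]    # S_i = max pooling over the length
        return torch.cat(pooled, dim=1)                 # third output S = [S_1, S_2, S_3]
```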
S305, fusing the first output and the third output to obtain an output result of the fusion model.
Fusing the first output of step S302 with the third output of step S304 to obtain the output result of the fusion model includes:
Fusing the first output and the third output according to the following formula;
α_1 = Sigmoid(W_1 h + b_1)

α_2 = 1 − α_1

z = α_1 h + α_2 S

where W_1 and b_1 are a weight matrix and a bias term respectively, α_1 and α_2 are the first weight vector and the second weight vector respectively and satisfy α_1 + α_2 = 1, h is the first output, S is the third output, and z is the output result of the fusion model.
In this embodiment, some sentences in the text may relate to other topics, so the convolution operation may capture salient features of other classes; it is therefore necessary to preserve the context features. According to the differing degree to which each sample depends on the context features, this embodiment introduces a gating mechanism that assigns weights to the salient text features captured by the Dilated CNN model and the contextual text features captured by the BiLSTM model and combines the two organically, i.e., the first output of step S302 and the third output of step S304 are fused, further improving the accuracy of text classification.
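A minimal PyTorch sketch of the gating fusion of step S305, assuming h and S have already been brought to a common dimension (for example by pooling or projection); the gate computes α_1 = Sigmoid(W_1 h + b_1) and uses α_2 = 1 − α_1.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Gate that weights the BiLSTM context features h against the Dilated CNN salient
    # features S, following z = alpha_1 * h + alpha_2 * S with alpha_1 + alpha_2 = 1.
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)   # W_1 and b_1

    def forward(self, h, s):
        alpha1 = torch.sigmoid(self.linear(h))
        return alpha1 * h + (1 - alpha1) * s
```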
In one embodiment, as shown in fig. 5, the step S104 further includes steps S401 to S403:
S401, inputting samples in the enhanced data set to a Word2vec representation layer to perform vector representation, so as to obtain a text vector X = [x_1, x_2, …, x_n];

In this embodiment, as in step S301, the maximum text length is 600, so each text length is n=600. Let x_i be the word vector of the i-th word; the samples in the enhanced data set are represented as vectors, and the vector representation of the input text is X = [x_1, x_2, …, x_n].
S402, performing association calculation on the text vector through an attention mechanism according to the following formulas to obtain a context vector:

Assuming the length of the text sequence is m and x_j is the word vector of the j-th character, the context vector obtained by the attention mechanism is

g_j = Σ_{k=1}^{m} α_{j,k} x_k

where α_{j,k} is the correlation coefficient between the j-th character and the k-th character, obtained by the following formula:

α_{j,k} = exp(score(x_j, x_k)) / Σ_{k'=1}^{m} exp(score(x_j, x_k'))

The score function is expressed as follows:

score(x_j, x_k) = w_u^T tanh(W_a[x_j, x_k])

where W_a and w_u are training parameters and T denotes transposition. x_j and x_k are mapped by the matrix W_a, the result is passed through the tanh activation function to obtain the joint feature, and the score of x_j and x_k is finally obtained by multiplying the joint feature by w_u^T.
S403, splicing the context vector g_j with the original text vector x_j to obtain the output result of the Attention model: the spliced vector [x_j, g_j].
In this embodiment, as shown in fig. 6, the Attention model applied in deep learning simulates the attention mechanism of the human brain. In the structures of CNNs and RNNs, every word contributes equally to the classification target. In fact, each word in a text sequence contributes differently to the topic classification of the text, with keywords playing a more important role than other words. This embodiment therefore introduces an Attention mechanism that calculates the semantic association coefficients between each word and the other words in the text sequence and assigns word-vector weights according to these coefficients; the weighted linear combination of the word vectors forms the final word context vector. In this way, the word context vector can focus more on the words with greater weight.
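A minimal PyTorch sketch of the attention of steps S402 and S403; the additive score function used here is an assumption reconstructed from the description above, and the embedding and attention dimensions are illustrative choices.

```python
import torch
import torch.nn as nn

class WordContextAttention(nn.Module):
    # For every word x_j, computes a context vector g_j as the softmax-weighted sum of all
    # word vectors x_k; score(x_j, x_k) = w_u^T tanh(W_a [x_j, x_k]) is an assumed form
    # reconstructed from the description above.
    def __init__(self, embed_dim=300, attn_dim=128):
        super().__init__()
        self.w_a = nn.Linear(2 * embed_dim, attn_dim, bias=False)
        self.w_u = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, x):                               # x: (batch, m, embed_dim)
        m = x.size(1)
        xj = x.unsqueeze(2).expand(-1, -1, m, -1)       # (batch, m, m, d)
        xk = x.unsqueeze(1).expand(-1, m, -1, -1)       # (batch, m, m, d)
        scores = self.w_u(torch.tanh(self.w_a(torch.cat([xj, xk], dim=-1)))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)           # correlation coefficients alpha[j, k]
        g = torch.matmul(alpha, x)                      # g_j = sum_k alpha[j, k] * x_k
        return torch.cat([x, g], dim=-1)                # spliced output [x_j, g_j]
```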
In one embodiment, as shown in fig. 7, the step S104 further includes steps S501 to S502:
S501, combining the output result of the fusion model and the output result of the Attention model to obtain an output q, and inputting q to the BERT model according to the following formula to obtain an output S: S = BERT(q);
S502, mapping the feature space to the classification space by using a full connection layer and classifying the output S to obtain an output y as follows: y = softmax(W_2 S + b_2), where W_2 and b_2 are the parameters of the fully connected layer and y is a vector whose dimension equals the number of categories, each column representing the probability that the text belongs to that category. In the prediction phase, the predicted category is obtained as

ŷ = argmax(y)

where argmax is the function that returns the argument (set) that maximizes the function value.

The loss is calculated as follows:

L = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{M} y_ij log(P_ij)

where P_ij is the probability that the model predicts the i-th sample to be of the j-th class, y_ij is the label probability that the i-th sample belongs to the j-th class, M is the number of classes, and n is the number of samples in a mini-batch.
In this embodiment, the BERT model is a pre-trained language model that aims to train with large scale unlabeled corpus to obtain semantic representations of text, which are then fine-tuned in a specific downstream NLP task, here text classification.
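A minimal sketch of the second-stage model of steps S501 and S502 using the Hugging Face transformers library; projecting the fused features q and feeding them to BERT through `inputs_embeds` is one plausible reading of S = BERT(q), the model name and dimensions are illustrative, and `nn.CrossEntropyLoss` realizes the cross-entropy loss given above.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BlendingMetaModel(nn.Module):
    # Second stage: the combined base-model output q is projected to the BERT hidden size,
    # encoded by BERT via inputs_embeds, and classified with a fully connected layer.
    def __init__(self, q_dim, num_classes, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.project = nn.Linear(q_dim, self.bert.config.hidden_size)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)  # W_2, b_2

    def forward(self, q):                                    # q: (batch, seq_len, q_dim)
        s = self.bert(inputs_embeds=self.project(q)).pooler_output
        return self.classifier(s)                            # logits; softmax/argmax at inference

criterion = nn.CrossEntropyLoss()  # cross-entropy over the softmax probabilities
```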
The enhanced data set comprises a training set, a verification set and a test set; the complaint event classification method further comprises the following steps:
And inputting samples of the verification set and the test set into the two trained base models, combining output results of the two base models, and inputting the combined output results into the trained BERT model to obtain a verification result and a test result.
Further, fig. 8 is a specific flowchart of the Blending fusion algorithm. First, train_x|train_y in the upper left corner represents the training texts and their corresponding labels. The training data are input into the fusion model of BiLSTM and Dilated CNN and into the Attention model for training; after training, weight a and weight b (namely the trained fusion model of BiLSTM and Dilated CNN and the trained Attention model) are obtained. The output result z of the fusion model and the output result [x_j, g_j] of the Attention model are then combined to obtain the output q, which is input into the BERT model to obtain the output S; S is classified, completing the training process on the training set. Next, the text data of the verification set are input into weight a and weight b respectively to obtain a(val_x) and b(val_x), and the two groups of features are spliced together to obtain a new data set: a(val_x), b(val_x)|val_y. Then a(val_x), b(val_x)|val_y is input into the BERT model to obtain the final weight. To test the performance of this weight, test_x|test_y is likewise input into the fusion model of BiLSTM and Dilated CNN and into the Attention model to obtain a(test_x) and b(test_x); these two groups of features are spliced together to obtain the final test data set a(test_x), b(test_x)|test_y, which is fed into the final weight to evaluate its performance. In a specific application example, an input text is processed along the same path as the test set to obtain the inferred text label; the only difference is that the inferred text has no label, whereas the test-set data are labelled.
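The Blending data flow of fig. 8 can be sketched as follows; `model_a`, `model_b` and `meta_model` are hypothetical objects with scikit-learn-style fit/predict/score interfaces, and the two base models are assumed to have already been trained on the training set.

```python
import numpy as np

def blending_flow(model_a, model_b, meta_model, val, test):
    # Stage 1: the trained base models (weight a and weight b) produce features on the
    # verification and test sets.
    a_val, b_val = model_a.predict(val["x"]), model_b.predict(val["x"])
    a_test, b_test = model_a.predict(test["x"]), model_b.predict(test["x"])

    # Stage 2: the spliced features a(val_x)|b(val_x) and the verification labels train the
    # meta model (the BERT model in this embodiment); the test set measures its performance.
    meta_model.fit(np.column_stack([a_val, b_val]), val["y"])
    return meta_model.score(np.column_stack([a_test, b_test]), test["y"])
```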
S105, deploying the integrated learning model and classifying the currently input property complaint event.
In the present embodiment, as shown in fig. 9, the ensemble learning model is a technique that combines a plurality of base models to obtain better performance. After the ensemble learning model is deployed on the embedded device side, when a complaint event occurs in the property-management field, a property staff member can orally describe the details of the complaint event; the device then automatically performs speech recognition, extracts a summary of the complaint event and predicts the text label in the built ensemble learning model. If the label predicted by the machine is found to be wrong, it can be adjusted manually, and finally the correct label of the complaint event is output. The deployed ensemble learning model is used for real-time processing of property complaints: when a new complaint event arrives, the model automatically classifies it, helping business staff quickly assess the nature and urgency of the complaint. The ensemble learning model considers the prediction results of the multiple underlying models and weights them to determine the final classification result, which reduces the bias of individual models and improves the overall classification accuracy.
By applying the integrated learning model to the property complaint event classification, the processing efficiency and accuracy can be improved, and the working efficiency of property staff can be improved. This will help to expedite the resolution of complaints, improve property management and quality of service.
As shown in fig. 10, the embodiment of the present invention further provides a complaint event classifying device 600 based on ensemble learning, including: a preprocessing unit 601, a digest extraction unit 602, a data enhancement unit 603, a model generation unit 604, and a model deployment unit 605.
The preprocessing unit 601 is configured to preprocess a historical property complaint event to obtain an original data set including a plurality of historical property complaint events;
The abstract extraction unit 602 is configured to extract an abstract of the original data set by using a GPT-3 model, so as to obtain a key data set;
A data enhancing unit 603, configured to perform data enhancement on the key data set by using an AEDA technology, so as to obtain an enhanced data set;
A model generating unit 604, configured to train on the enhanced data set using a Blending fusion algorithm, so as to generate an ensemble learning model; the Blending fusion algorithm comprises two stages, wherein the first stage consists of two base models, namely a fusion model of BiLSTM and Dilated CNN and an Attention model, and the second stage is a BERT model;
The model deployment unit 605 is configured to deploy the integrated learning model and classify a currently input property complaint event.
In one embodiment, as shown in fig. 11, the preprocessing unit 601 includes:
The data cleaning unit 701 is configured to obtain a plurality of historical property complaint event records, and clean repeated records, error records and empty records in the plurality of historical property complaint event records;
The keyword extraction unit 702 is configured to extract keywords in the plurality of historical property complaint event records by using a regular expression and TF-IDF method, and screen a plurality of high-frequency records according to the keywords;
the data filtering unit 703 is configured to filter a plurality of historical property complaint events meeting the text requirement from the plurality of high-frequency records, and perform data cleaning to obtain an original data set.
In one embodiment, as shown in fig. 12, the model generating unit 604 includes:
A text vector representation unit 801, configured to input samples in the enhanced data set to a Word2vec representation layer for vector representation, to obtain a text vector;
A first output unit 802, configured to input the text vector to a forward LSTM unit and a reverse LSTM unit in BiLSTM, so as to obtain a forward output and a reverse output from two opposite directions, and splice the forward output and the reverse output to obtain a first output;
A second output unit 803, configured to convolve the first output with one-dimensional hole convolutions with different sizes, so as to obtain a plurality of second outputs;
A third output unit 804, configured to pool and splice the plurality of second outputs respectively, to obtain a third output;
and a result fusion unit 805, configured to fuse the first output and the third output to obtain an output result of the fusion model.
In an embodiment, as shown in fig. 13, the model generating unit 604 further includes:
A text vector representation unit 901, configured to input samples in the enhanced data set to a Word2vec representation layer for vector representation, so as to obtain a text vector X = [x_1, x_2, …, x_n];
The association calculation unit 902 is configured to obtain a context vector after performing association calculation on the text vector according to the following formulas:

g_j = Σ_{k=1}^{m} α_{j,k} x_k

where α_{j,k} is the correlation coefficient between the j-th character and the k-th character, obtained by the following formula:

α_{j,k} = exp(score(x_j, x_k)) / Σ_{k'=1}^{m} exp(score(x_j, x_k'))

The score function is expressed as follows: score(x_j, x_k) = w_u^T tanh(W_a[x_j, x_k]), where W_a and w_u are training parameters and T denotes transposition.

The text vector splicing unit 903 is configured to splice the context vector g_j with the original text vector x_j to obtain the output result of the Attention model: the spliced vector [x_j, g_j].
In an embodiment, as shown in fig. 14, the model generating unit 604 further includes:
The result merging unit 1001 is configured to merge the output result of the fusion model and the output result of the Attention model to obtain an output q, and to input q to the BERT model according to the following formula to obtain an output S: S = BERT(q);

The result classification unit 1002 is configured to map the feature space to the classification space by using the full connection layer and to classify the output S to obtain an output y as follows: y = softmax(W_2 S + b_2), where W_2 and b_2 are the parameters of the fully connected layer and y is a vector whose dimension equals the number of categories, each column representing the probability that the text belongs to that category. In the prediction phase, the predicted category is obtained as ŷ = argmax(y), where argmax is the function that returns the argument (set) that maximizes the function value.

The loss is calculated as follows: L = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{M} y_ij log(P_ij), where P_ij is the probability that the model predicts the i-th sample to be of the j-th class, y_ij is the label probability that the i-th sample belongs to the j-th class, M is the number of classes, and n is the number of samples in a mini-batch.
The device utilizes the GPT-3 model to carry out text summarization on the property complaint event, uses the Blending fusion algorithm to train, verify and test the data to generate the integrated learning model, deploys the integrated learning model and classifies the currently input property complaint event, finally completes reasoning classification, reduces the working pressure of property staff and improves the working efficiency.
It should be noted that, as those skilled in the art can clearly understand the specific implementation process of the foregoing apparatus and each unit, reference may be made to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, no further description is provided herein.
The complaint event classifying means based on ensemble learning as described above may be implemented in the form of a computer program which can be run on a computer device as shown in fig. 15.
Referring to fig. 15, fig. 15 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 1100 is a server, and the server may be a stand-alone server or a server cluster formed by a plurality of servers.
With reference to FIG. 15, the computer device 1100 includes a processor 1102, memory, and a network interface 1105 connected through a system bus 1101, wherein the memory may include a non-volatile storage medium 1103 and an internal memory 1104.
The non-volatile storage medium 1103 may store an operating system 11031 and computer programs 11032. The computer program 11032, when executed, may cause the processor 1102 to perform an ensemble learning-based complaint event classification method.
The processor 1102 is operable to provide computing and control capabilities to support the operation of the overall computer device 1100.
The internal memory 1104 provides an environment for the execution of a computer program 11032 in the non-volatile storage medium 1103, which computer program 11032, when executed by the processor 1102, causes the processor 1102 to perform an ensemble learning-based complaint event classification method.
The network interface 1105 is used for network communication such as providing transmission of data information, etc. It will be appreciated by those skilled in the art that the architecture shown in fig. 15 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting of the computer device 1100 to which the present inventive arrangements may be implemented, and that a particular computer device 1100 may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
It will be appreciated by those skilled in the art that the embodiment of the computer device shown in fig. 15 is not limiting of the specific construction of the computer device, and in other embodiments, the computer device may include more or less components than those illustrated, or certain components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may include only a memory and a processor, and in such embodiments, the structure and function of the memory and the processor are consistent with the embodiment shown in fig. 15, and will not be described again.
It should be appreciated that in an embodiment of the invention, the processor 1102 may be a central processing unit (CPU), and the processor 1102 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program when executed by a processor implements the complaint event classification method based on ensemble learning of the embodiment of the present invention.
The storage medium is a physical, non-transitory storage medium, and may be, for example, a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the claims.