Disclosure of Invention
In view of this, the present application provides a grammar parsing method, apparatus, device and storage medium for parsing the grammar information of a text, thereby helping users learn a language better. The technical solutions are as follows:
a syntax parsing method, comprising:
acquiring a target sentence;
Analyzing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by using a pre-established grammar analysis model, and generating a joint analysis tree capable of simultaneously presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence according to an analysis result;
the grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence.
Optionally, the parsing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence includes:
Taking the target sentence as a text to be analyzed, predicting a component label of the text to be analyzed, and predicting a segmentation mode of the text to be analyzed;
Predicting dependency relationship labels among text segments obtained after the text to be analyzed is segmented according to the segmentation mode;
and for each text segment obtained by segmentation, if the text segment includes a plurality of words, taking the text segment as a new text to be parsed and returning to the step of predicting the component label of the text to be parsed and the subsequent steps.
Optionally, the joint parse tree includes a plurality of leaf nodes and a plurality of non-leaf nodes, each leaf node represents a word in the target sentence, each non-leaf node represents a phrase in the target sentence, the next level of each non-leaf node is a leaf node and/or a non-leaf node, and the information of each node includes the word or phrase represented by the node and the component tag of the word or phrase represented by the node;
nodes at different levels that have a hierarchical relationship are connected by first connecting lines, peer nodes that have a dependency relationship are connected by second connecting lines, and each second connecting line carries a dependency relationship label.
Optionally, the predicting the component label of the text to be parsed includes:
Predicting the probability that the component label of the text to be analyzed is a set component label, and determining the component label of the text to be analyzed according to the probability that the component label of the text to be analyzed is a set component label;
the predicting the segmentation mode of the text to be parsed includes:
predicting the score of each candidate segmentation mode of the text to be analyzed, and determining the segmentation mode of the text to be analyzed according to the score of each candidate segmentation mode of the text to be analyzed.
Optionally, the predicting the probability that the component label of the text to be parsed is the set component label includes:
Determining a characterization vector of the text to be parsed according to a forward vector of a first word, a forward vector of a backward adjacent word of a second word, a backward vector of the second word and a backward vector of a forward adjacent word of the first word, wherein the first word is the first word of the text to be parsed, the second word is the last word of the text to be parsed, the forward vector of a word can represent the semantics of the word when the target sentence is read from front to back, and the backward vector of a word can represent the semantics of the word when the target sentence is read from back to front;
and predicting the probability that the component labels of the text to be analyzed are the set component labels according to the characterization vector of the text to be analyzed.
Optionally, the determining the token vector of the text to be parsed according to the forward vector of the first word, the forward vector of the backward neighboring word of the second word, the backward vector of the second word and the backward vector of the forward neighboring word of the first word includes:
subtracting the forward vector of the backward adjacent word of the second word from the forward vector of the first word to obtain a forward vector difference value, and subtracting the backward vector of the forward adjacent word of the first word from the backward vector of the second word to obtain a backward vector difference value;
and splicing the forward vector difference value and the backward vector difference value, and taking the spliced vector as a characterization vector of the text to be analyzed.
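As a minimal sketch, the two steps above can be written as follows. Python is used for illustration only; `fwd`/`bwd` are assumed to be arrays of per-word forward/backward vectors with zero sentinel rows at both ends, and the subtraction order is one reading of the wording above, not a fixed requirement of the method:

```python
import numpy as np

def span_repr(fwd, bwd, i, j):
    """Characterization vector of the text span covering words i..j.

    fwd[k] / bwd[k] are the forward / backward vectors of word k; rows 0 and
    n+1 are zero sentinels so that the adjacent-word lookups stay valid at
    the sentence boundaries. The sign convention is an assumption.
    """
    forward_diff = fwd[i] - fwd[j + 1]    # first word vs. backward neighbor of last word
    backward_diff = bwd[j] - bwd[i - 1]   # last word vs. forward neighbor of first word
    # splice (concatenate) the two difference vectors into the span vector
    return np.concatenate([forward_diff, backward_diff])
```

The resulting vector has twice the dimension of a single forward vector, and can be fed directly to the component-label classifier described below.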
Optionally, the process of obtaining the forward vector and the backward vector of a word in the target sentence includes:
Obtaining a word vector, a part-of-speech characterization vector and a position characterization vector of the word by utilizing the grammar analysis model, summing the part-of-speech characterization vector of the word and the position characterization vector of the word, splicing the summed vector with the word vector of the word, and taking the spliced vector as the characterization vector of the word to obtain the characterization vector of each word in the target sentence;
And performing attention calculation on the token vector of the word and the token vectors of other words in the target sentence by using the grammar analysis model to obtain a context vector of the word, and obtaining a forward vector and a backward vector of the word according to the context vector of the word.
Optionally, the predicting the score of each candidate segmentation mode of the text to be parsed includes:
For each candidate segmentation approach:
predicting, for each text segment obtained by segmenting the text to be parsed according to the candidate segmentation mode, the probability that the text segment is a phrase constituent, so as to obtain a probability corresponding to each text segment;
summing the probabilities corresponding to the respective text segments, and taking the summed probability as the score of the candidate segmentation mode;
so as to obtain the score of each candidate segmentation mode of the text to be parsed.
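A sketch of this scoring scheme, assuming a `phrase_prob(i, j)` callback stands in for the grammar parsing model's phrase-constituent prediction (the callback and span encoding are illustrative assumptions):

```python
def score_split(phrase_prob, split):
    """Score of one candidate segmentation mode: the sum of the probabilities
    that each resulting text segment is a phrase constituent.

    phrase_prob(i, j) -> probability that words i..j form a phrase;
    split -> list of (i, j) segment spans for one candidate segmentation mode.
    """
    return sum(phrase_prob(i, j) for i, j in split)

def best_split(phrase_prob, candidates):
    # the segmentation mode with the highest score is selected
    return max(candidates, key=lambda s: score_split(phrase_prob, s))
```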
Optionally, the predicting the dependency relationship label between text segments obtained after the text to be analyzed is segmented according to the segmentation mode includes:
Predicting the score of each candidate arc drawing mode of the text segment obtained after the text to be analyzed is segmented according to the segmentation mode, and determining a target arc drawing mode according to the score of each candidate arc drawing mode, wherein each arc under each candidate arc drawing mode is a directed arc pointing from one word in one text segment to one word in the other text segment;
And for each arc in the target arc drawing mode, predicting the probability that the dependency relationship label of the two words connected by the arc is the set dependency relationship label, and determining the dependency relationship label of the two words connected by the arc according to the probability that the dependency relationship label of the two words connected by the arc is the set dependency relationship label.
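This two-stage selection (first choose the highest-scoring candidate arc drawing mode, then label each arc with its most probable dependency label) can be sketched as follows; the candidate arc sets and the scoring/label callbacks are illustrative stand-ins for the grammar parsing model, not its actual interface:

```python
def choose_arcs_and_labels(arc_candidates, arc_score, label_probs):
    """arc_candidates: list of candidate arc sets, each a list of (head, dep)
    word pairs; arc_score(arcs) -> score of one candidate arc drawing mode;
    label_probs(head, dep) -> dict mapping dependency labels to probabilities."""
    target = max(arc_candidates, key=arc_score)   # target arc drawing mode
    labeled = []
    for head, dep in target:
        probs = label_probs(head, dep)
        labeled.append((head, dep, max(probs, key=probs.get)))
    return labeled
```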
Optionally, the predicting the probability that the dependency label of the two words connected by the arc is the set dependency label includes:
Acquiring one or more of the following features of the two words connected by the arc: word-level features, distance features and sentence-level features;
Determining a characterization vector of the arc according to the acquired characteristics;
Based on the characterization vector of the arc, the probability that the dependency label of the two words connected by the arc is the set dependency label is predicted.
Optionally, acquiring sentence-level features of the two words connected by the arc includes:
Obtaining a characterization vector of a first part and a characterization vector of a last part in three parts obtained by dividing the target sentence by taking two words connected by the arc as boundary lines;
and differencing the characterization vector of the last part with the characterization vector of the first part, wherein the vector obtained by differencing is used as the sentence-level feature of the two words connected by the arc.
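A minimal sketch of this sentence-level feature. How each of the three parts is pooled into a single characterization vector is not fixed above, so mean pooling is used here as an assumption:

```python
import numpy as np

def sentence_level_feature(token_vecs, h, d):
    """Sentence-level feature of an arc between word positions h and d.

    The sentence is divided into three parts with the two connected words as
    boundaries; the feature is (vector of last part) - (vector of first part).
    Each part is characterized by the mean of its token vectors (pooling
    choice is an assumption); an empty part contributes a zero vector.
    """
    lo, hi = sorted((h, d))
    first = token_vecs[:lo]       # words before the earlier boundary word
    last = token_vecs[hi + 1:]    # words after the later boundary word
    pool = lambda part: part.mean(axis=0) if len(part) else np.zeros(token_vecs.shape[1])
    return pool(last) - pool(first)
```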
Optionally, the process of establishing the syntax analysis model includes:
Taking a training text as a text to be parsed, and predicting, by using the grammar parsing model, the probability that the component label of the text to be parsed is each set component label and the score of each candidate segmentation mode of the text to be parsed, as a first prediction result;
predicting, for each candidate segmentation mode of the text to be parsed, the score of each candidate arc drawing mode of the text segments obtained by segmenting the text to be parsed according to the candidate segmentation mode, and the probability that the dependency relationship label of the two words connected by each arc under each candidate arc drawing mode is each set dependency relationship label, so as to obtain a prediction result under each candidate segmentation mode as a second prediction result;
parameter updating is carried out on the grammar analysis model according to the first prediction result, the second prediction result and the relevant part in the joint analysis tree corresponding to the training text;
and for each text segment obtained by segmentation according to each candidate segmentation mode, if the text segment includes a plurality of words, taking the text segment as a new text to be parsed and returning to the step of predicting, by using the grammar parsing model, the probability that the component label of the text to be parsed is each set component label and the score of each candidate segmentation mode of the text to be parsed, and the subsequent steps.
Optionally, the updating parameters of the grammar analysis model according to the first prediction result, the second prediction result and the relevant part in the joint analysis tree corresponding to the training text includes:
Determining a first prediction loss of a grammar analysis model according to the first prediction result and a relevant part in hierarchical grammar structure information presented by a joint analysis tree corresponding to the training text;
Determining a second prediction loss of the grammar analysis model according to the second prediction result and the relevant part in the inter-word dependency relationship information presented by the joint analysis tree corresponding to the training text;
and fusing the first prediction loss and the second prediction loss, and updating parameters of the grammar analysis model according to the fused loss.
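The loss fusion can be sketched as a weighted sum; the weighting coefficient is an assumption for illustration, since the fusion scheme is not fixed above:

```python
def fused_loss(constituency_loss, dependency_loss, alpha=0.5):
    """Fuse the first prediction loss (hierarchical grammar structure) with the
    second prediction loss (inter-word dependencies). A convex combination is
    one simple fusion; alpha is a hypothetical hyperparameter."""
    return alpha * constituency_loss + (1.0 - alpha) * dependency_loss
```

The fused scalar is then used for one parameter update of the grammar parsing model, so both prediction tasks are trained jointly.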
A grammar analysis device comprises a text acquisition module and a grammar analysis module;
the text acquisition module is used for acquiring a target sentence;
The grammar analysis module is used for analyzing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by utilizing a pre-established grammar analysis model, and generating a joint analysis tree capable of simultaneously presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence according to analysis results;
the grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence.
A syntax parsing apparatus includes a memory and a processor;
The memory is used for storing programs;
the processor is configured to execute the program to implement each step of the syntax analysis method described in any one of the above.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the syntax parsing method described in any one of the above.
According to the grammar analysis method provided by the application, firstly, the target sentence is acquired, then the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence are analyzed by utilizing the pre-established grammar analysis model, and a joint analysis tree capable of presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence is generated according to the analysis result. The grammar analysis method provided by the embodiment of the application can analyze more detailed grammar information of the target sentence.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In order to obtain the grammar information of a target sentence, the applicant conducted research. The initial idea was to parse the target sentence with a natural language processing toolkit combined with rules, that is, to first parse the target sentence with the natural language processing toolkit and then determine the grammar information of the target sentence with the help of some rules. However, natural language processing toolkits are based on relatively early technology and process the target sentence poorly, the adopted rules are simple, and the finally determined grammar information is not comprehensive enough, so it is difficult to meet users' learning needs.
In view of the problems of the parsing method that combines a natural language processing toolkit with rules, the inventor tried to propose a syntax parsing method that does not rely on rules and can parse more comprehensive and detailed grammar information. To this end, the applicant conducted in-depth research and, through continuous study, finally proposed a syntax parsing method with a better effect. The syntax parsing method can be applied to an electronic device with data processing capability, which may be a server on the network side (a single server, a plurality of servers, or a server cluster) or a terminal on the user side, such as a smartphone or a dictionary pen. The syntax parsing method provided by the present application is described through the following embodiments.
First embodiment
Referring to fig. 1, a flow chart of a syntax parsing method provided by an embodiment of the present application is shown, where the method may include:
step S101, acquiring a target sentence.
Alternatively, the target sentence may be an english sentence, and of course, the embodiment is not limited thereto, and the target sentence may be a sentence of another language.
Step S102, analyzing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by utilizing a pre-established grammar analysis model, and generating a joint analysis tree capable of presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence at the same time according to the analysis result.
The grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence, and the joint analysis tree corresponding to the training sentence can simultaneously present hierarchical grammar structure information and inter-word dependency relationship information of the training sentence.
Referring to fig. 2, a flow chart of "parsing hierarchical grammar structure information and inter-word dependency information of a target sentence with a pre-established grammar parsing model" in step S102 may include:
and S201, taking the target sentence as a text to be analyzed.
And S202, predicting component labels of the text to be analyzed by using a grammar analysis model, and predicting the segmentation mode of the text to be analyzed.
The component label of the text to be parsed is one of a plurality of set component labels and indicates what kind of phrase the text to be parsed is; for example, if the component label of the text to be parsed is VP, the text to be parsed is a verb phrase, and if the component label of the text to be parsed is NP, the text to be parsed is a noun phrase.
And S203, predicting dependency relationship labels among text segments obtained by segmenting the text to be analyzed according to the segmentation mode of the text to be analyzed by utilizing a grammar analysis model.
The dependency relationship between text segments refers to the dependency relationship between a word in one text segment and a word in another text segment; that is, the dependency relationship is essentially a dependency relationship between two words. It should be noted that, of two words having a dependency relationship, one is the core word (head) and the other is the dependent word (dependent).
The dependency relationships are of various types, such as noun clause modifier, relative clause modifier, clausal complement, clausal subject (including passive), open clausal complement, adverbial clause modifier, emphasis, locative and feature-word modifiers, prepositional modifier, noun modifier (possessive, temporal, etc.), temporal modifier, locative modifier, compound (verb, noun, etc.), foreign word, measure word, possessive, pronoun coreference, nominal subject (including passive), object (temporal, spatial, etc.), auxiliary verb, passive auxiliary verb, root, coordinating conjunction, conjunct, conjunction and preposition, determiner, punctuation and other markers. The dependency relationship label between two words indicates which specific dependency relationship exists between the two words.
Step S204, for each text segment obtained by segmenting the text to be analyzed according to the segmentation mode of the text to be analyzed, if the text segment comprises a plurality of words, the text segment is used as the text to be analyzed, and the step S202 and the subsequent steps are executed.
It should be noted that, for each text segment obtained by segmentation, if the text segment includes a word, the processing of the text segment may be ended.
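The loop of steps S201 to S204 can be sketched as a recursive procedure. In this illustration the three predictor callbacks stand in for the grammar parsing model; their signatures are hypothetical:

```python
def parse(span, predict_label, predict_split, predict_deps):
    """Top-down joint parsing of steps S201-S204, written as a recursion.

    span: list of words of the text to be parsed;
    predict_label(span)  -> component label of the span (step S202);
    predict_split(span)  -> list of sub-spans, i.e. the chosen segmentation
                            mode (step S202);
    predict_deps(segs)   -> dependency labels between the segments (step S203).
    """
    node = {"span": span, "label": predict_label(span), "children": []}
    if len(span) > 1:  # a single-word segment ends the recursion (step S204)
        segments = predict_split(span)
        node["deps"] = predict_deps(segments)
        for seg in segments:
            node["children"].append(
                parse(seg, predict_label, predict_split, predict_deps))
    return node
```

The returned nested dictionary already has the shape of the joint parse tree: the hierarchy is carried by `children`, and the same-level dependency arcs by `deps`.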
The above-described analysis process is described below with reference to a specific example.
Assume that the target sentence is "Federal Paper Board sells paper and wood products .":
The sentence "Federal Paper Board sells paper and wood products ." is first taken as the text to be parsed, and the following steps are executed:
First, the component label of "Federal Paper Board sells paper and wood products ." is predicted; then, its segmentation mode is predicted, which is assumed to be "Federal Paper Board / sells paper and wood products / .", so three text segments, "Federal Paper Board", "sells paper and wood products" and ".", are obtained after segmentation in this mode; finally, the dependency relationship labels among the three text segments obtained by segmentation are predicted: the dependency label between "sells" in "sells paper and wood products" and "Board" in "Federal Paper Board" is predicted to be "nsubj", which denotes a nominal subject, where "sells" is the core word and "Board" is the dependent word; in addition, the dependency label between "sells" and "." is predicted to be "punct",
which denotes punctuation.
Since "Federal Paper Board" and "sells paper and wood products", obtained by segmenting "Federal Paper Board sells paper and wood products .", each include a plurality of words, the text segment "Federal Paper Board" is further parsed as a text to be parsed, and the text segment "sells paper and wood products" is likewise further parsed as a text to be parsed. Taking the parsing of "sells paper and wood products" as an example:
First, the component label of "sells paper and wood products" is predicted, and it is predicted to be "VP"; then, the segmentation mode of "sells paper and wood products" is predicted to be "sells / paper and wood products", and two text segments, "sells" and "paper and wood products", are obtained by segmenting in this mode; finally, the dependency relationship label between the word in "sells" and a word in "paper and wood products" is predicted: a dependency relationship is predicted between "sells" and "products" in "paper and wood products", and the dependency label of "sells" and "products" is predicted to be "obj", which denotes an object.
Since the text segment "paper and wood products" contains a plurality of words, it is necessary to further parse "paper and wood products" as the text to be parsed:
First, the component label of "paper and wood products" is predicted, and it is predicted to be "NP"; then, the segmentation mode of "paper and wood products" is predicted to be "paper and wood / products", and two text segments, "paper and wood" and "products", are obtained by segmenting in this mode; finally, the dependency relationship label between a word in "paper and wood" and the word in "products" is predicted: a dependency relationship is predicted between "paper" and "products", and the dependency label of "paper" and "products" is predicted to be "compound", which denotes a compound word.
Since the text section "paper and wood" contains a plurality of words, the text section "paper and wood" needs to be further parsed as text to be parsed:
First, the component label of "paper and wood" is predicted; then, the segmentation mode of "paper and wood" is predicted to be "paper / and / wood", and three text segments, "paper", "and" and "wood", are obtained by segmenting in this mode; finally, the dependency relationships among the words "paper", "and" and "wood" are predicted, where the dependency label between "paper" and "wood" is predicted to be "conj:and", which denotes a conjunct joined by "and". Since "paper", "and" and "wood" each consist of a single word, the parsing of "Federal Paper Board sells paper and wood products ." ends here. Preferably, if a text segment obtained by segmentation includes only one word, the part-of-speech tag of the word may be used as the component label of the word.
The above procedure is performed to parse "Federal Paper Board sells paper and wood products .", finally obtaining the hierarchical grammar structure information of the sentence (shown in FIG. 3) and the inter-word dependency relationship information of the sentence (shown in FIG. 4).
After the analysis results are predicted, a joint parse tree may be generated according to the analysis results. Optionally, since the analysis is performed layer by layer (that is, from coarse granularity to fine granularity), the joint parse tree may be generated gradually as the analysis results are produced.
In this embodiment, the joint parse tree capable of presenting the hierarchical grammar structure information and the inter-word dependency information of the target sentence simultaneously includes a plurality of leaf nodes and a plurality of non-leaf nodes, each leaf node representing a word in the target sentence, each non-leaf node representing a phrase in the target sentence, the next level of each non-leaf node being a leaf node and/or a non-leaf node, the information of each node including a word or phrase represented by the node and a component tag of the word or phrase represented by the node, the different level nodes having a hierarchical relationship being connected by a first connecting line, the peer nodes having a dependency relationship being connected by a second connecting line (such as a directed arc), each second connecting line having a dependency relationship tag thereon. The longitudinal presentation of the joint analysis tree is hierarchical grammar structure information of the target sentence, and the transverse presentation of the joint analysis tree is inter-word dependency relationship information of the target sentence.
Referring to FIG. 5, a schematic diagram of a joint parse tree (dependency labels not shown) obtained by parsing "Federal Paper Board sells paper and wood products ." with the grammar parsing model is shown. The node marked (1, 9) represents the phrase "Federal Paper Board sells paper and wood products .", and the "S" at that node is its component label; the node marked (1, 3) and the node marked (4, 8) represent the segments "Federal Paper Board" and "sells paper and wood products" obtained by segmenting in the mode "Federal Paper Board / sells paper and wood products / .", where the "NP" at the node marked (1, 3) is the component label of the phrase "Federal Paper Board" and the "VP" at the node marked (4, 8) is the component label of the phrase "sells paper and wood products". Nodes at different levels that have a hierarchical relationship are connected by connecting lines, and peer nodes that have a dependency relationship are connected by directed arcs; for example, the directed arc between the node marked (1, 3) and the node marked (4, 8) indicates that a dependency relationship exists between the two peer phrases they represent.
According to the grammar analysis method provided by the embodiment of the application, firstly, a target sentence is acquired, then, the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence are analyzed by utilizing a pre-established grammar analysis model, and a joint analysis tree capable of presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence is generated according to an analysis result. The grammar analysis method provided by the embodiment of the application can analyze more detailed grammar information of the target sentence.
Second embodiment
This embodiment describes the specific implementation of step S202 of predicting the component label of the text to be parsed and predicting the segmentation mode of the text to be parsed by using the grammar parsing model, and of step S203 of predicting, by using the grammar parsing model, the dependency relationship labels among the text segments obtained after the text to be parsed is segmented according to the segmentation mode.
Referring to fig. 6, a flow chart for predicting a component tag of a text to be parsed and predicting a segmentation method of the text to be parsed by using a syntax parsing model may include:
And S601, predicting the probability that the component label of the text to be analyzed is a set component label by using the grammar analysis model, and determining the component label of the text to be analyzed according to the probability that the component label of the text to be analyzed is the set component label.
The process of predicting the probability that the component label of the text to be parsed is the set component label may include:
step S6011, determining a characterization vector of the text to be analyzed according to the forward vector of the first word, the forward vector of the backward adjacent word of the second word, the backward vector of the second word and the backward vector of the forward adjacent word of the first word.
The first word is the first word of the text to be analyzed, the second word is the last word of the text to be analyzed, the forward adjacent word of the first word is the word which is positioned in front of the first word and adjacent to the first word in the target sentence, and the backward adjacent word of the second word is the word which is positioned behind the second word and adjacent to the second word in the target sentence. It should be noted that, the forward vector of a word can represent the meaning of the word when the target sentence is examined from the front to the back, and the backward vector of a word can represent the meaning of the word when the target sentence is examined from the back to the front.
The forward and backward vectors for each word in the target sentence may be obtained as follows:
Step a1, for each word x_i in the target sentence, acquiring a word vector, a part-of-speech characterization vector and a position characterization vector of the word x_i by using the grammar parsing model, summing the part-of-speech characterization vector of the word x_i and the position characterization vector of the word x_i, splicing the summed vector with the word vector of the word x_i, and taking the spliced vector as the characterization vector of the word x_i, so as to obtain the characterization vector of each word in the target sentence.
The word vector of each word in the target sentence can be determined by a pre-trained language model. When determining the word vectors, the target sentence is first preprocessed, where the preprocessing includes, but is not limited to, adding a space before punctuation marks, replacing the "n't" negative form with "not", unifying letter case, and the like; the preprocessed sentence is then segmented into words; finally, each word obtained by word segmentation is input into the pre-trained language model, and the pre-trained language model outputs the word vector of each word in the target sentence. After the word vector of each word in the target sentence is obtained, the word vectors may be input into a pre-trained part-of-speech tagging model, which outputs the part-of-speech tag of each word in the target sentence. Optionally, the part-of-speech tagging model may include a bidirectional LSTM (or any sequence-modeling neural network structure) and a fully connected layer; as shown in FIG. 7, the part-of-speech tag of each word in the target sentence can be obtained by passing the word vectors of the words in the target sentence through the bidirectional LSTM (or any sequence-modeling neural network structure) and the fully connected layer in sequence.
Referring to FIG. 8, the part-of-speech tags of the words in the target sentence "Federal Paper Board sells paper and wood products ." predicted by the part-of-speech tagging model are shown, where the part-of-speech tags in FIG. 8 represent:
NNP: Proper noun, singular;
VBZ: Verb, 3rd person singular present;
NN: Noun, singular or mass;
NNS: Noun, plural;
CC: Coordinating conjunction.
Optionally, the grammar parsing model in this embodiment may include an embedding layer. For each word x_i in the target sentence, the embedding layer of the grammar parsing model obtains the word vector of x_i output by the pre-trained language model, the characterization vector of the part of speech indicated by the part-of-speech tag of x_i output by the part-of-speech tagging model, and the position characterization vector of x_i; it then sums the part-of-speech characterization vector and the position characterization vector of x_i, and splices the summed vector with the word vector of x_i to obtain the characterization vector of x_i.
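As a toy sketch of the embedding composition just described (plain Python lists stand in for real tensors; the function name is hypothetical, not part of the claimed model):

```python
def word_representation(word_vec, pos_vec, position_vec):
    # Sum the part-of-speech characterization vector and the position
    # characterization vector element-wise...
    summed = [p + q for p, q in zip(pos_vec, position_vec)]
    # ...then splice (concatenate) the sum onto the word vector.
    return word_vec + summed
```

In a real model each of these vectors would be learned or produced by the pre-trained language model; here the arithmetic alone is illustrated.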
Step a2, for each word x_i in the target sentence, performing attention computation on the characterization vector of x_i and the characterization vectors of the other words in the target sentence by using the grammar parsing model to obtain a context vector of x_i, and obtaining a forward vector and a backward vector of x_i according to the context vector of x_i.
In one possible implementation, attention computation may be performed on the characterization vector of x_i and the characterization vectors of the other words in the target sentence based on a single-head self-attention mechanism, yielding one context vector; the context vector is split into a forward vector and a backward vector, the split-off forward vector is used as the forward vector of x_i, and the split-off backward vector is used as the backward vector of x_i. In another possible implementation, the attention computation may be based on a multi-head self-attention mechanism, yielding a plurality of context vectors; each context vector is split into a forward vector and a backward vector, all the forward vectors are spliced and the spliced vector is used as the forward vector of x_i, and, similarly, all the backward vectors are spliced and the spliced vector is used as the backward vector of x_i.
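The splitting-and-splicing of context vectors described above can be sketched as follows (an illustrative pure-Python helper with a hypothetical name; a single head corresponds to a list containing one context vector):

```python
def forward_backward(context_vectors):
    # Each head's context vector is split in half: the first half is its
    # forward part, the second half its backward part. All forward parts
    # are spliced into one forward vector, likewise all backward parts.
    forward, backward = [], []
    for ctx in context_vectors:
        half = len(ctx) // 2
        forward.extend(ctx[:half])
        backward.extend(ctx[half:])
    return forward, backward
```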
The characterization vector of the text to be parsed is determined according to the forward vector of the first word, the forward vector of the backward-adjacent word of the second word, the backward vector of the second word and the backward vector of the forward-adjacent word of the first word. Specifically, the forward vector of the first word is differenced with the forward vector of the backward-adjacent word of the second word to obtain a forward vector difference; the backward vector of the second word is differenced with the backward vector of the forward-adjacent word of the first word to obtain a backward vector difference; the forward vector difference and the backward vector difference are spliced, and the spliced vector is used as the characterization vector of the text to be parsed.
If the first word is represented by x_i (the first word of the text to be parsed, the i-th word of the target sentence) and the second word by x_j (the last word of the text to be parsed, the j-th word of the target sentence), and the forward vector of the first word x_i is denoted f_i, the backward vector of the second word x_j is denoted b_j, the backward vector of the forward-adjacent word x_{i-1} of the first word is denoted b_{i-1}, and the forward vector of the backward-adjacent word x_{j+1} of the second word is denoted f_{j+1}, then the characterization vector S_ij of the text to be parsed may be represented as:

S_ij = [f_i - f_{j+1}; b_j - b_{i-1}]  (1)
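The splicing of the forward and backward difference vectors described above can be sketched in pure Python (a hypothetical helper; real models use tensor libraries, and learned boundary vectors are assumed at the sentence edges):

```python
def span_vector(fwd, bwd, i, j):
    # fwd[t] / bwd[t] are the forward / backward vectors of word t;
    # positions 0 and len(fwd)-1 are assumed to hold boundary (padding)
    # vectors so that fwd[j+1] and bwd[i-1] exist for spans at the edges.
    fwd_diff = [a - c for a, c in zip(fwd[i], fwd[j + 1])]   # f_i - f_{j+1}
    bwd_diff = [a - c for a, c in zip(bwd[j], bwd[i - 1])]   # b_j - b_{i-1}
    return fwd_diff + bwd_diff                               # splice
```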
Step S6012, predicting the probability that the component label of the text to be parsed is each set component label according to the characterization vector of the text to be parsed.
Specifically, the probability s_labels(i, j) that the component label of the text to be parsed is a set component label can be determined by the following formula:

s_labels(i, j) = V_l · g(W_l · S_ij + b_l)  (2)

where g represents a nonlinear transformation, and V_l, W_l and b_l are parameters of the grammar analysis model obtained through training.
After the probabilities that the component label of the text to be parsed is each set component label have been predicted, the component label corresponding to the maximum predicted probability is determined as the component label of the text to be parsed.
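Formula (2) and the subsequent argmax can be sketched as follows (an illustrative pure-Python sketch with ReLU standing in for the unspecified nonlinearity g; all names are hypothetical):

```python
def relu(x):
    return x if x > 0 else 0.0

def score_labels(V, W, b, span_vec):
    # hidden = g(W * s + b), with g a nonlinear transformation (ReLU here)
    hidden = [relu(sum(w * s for w, s in zip(row, span_vec)) + bias)
              for row, bias in zip(W, b)]
    # scores = V * hidden, one score per set component label
    return [sum(v * h for v, h in zip(row, hidden)) for row in V]

def best_label(scores, labels):
    # the component label with the maximum predicted score wins
    return labels[max(range(len(scores)), key=scores.__getitem__)]
```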
Step S602, predicting the score of each candidate segmentation mode of the text to be analyzed by using the grammar analysis model, and determining the segmentation mode of the text to be analyzed according to the score of each candidate segmentation mode of the text to be analyzed.
The candidate segmentation modes of the text to be analyzed are all possible segmentation modes of the text to be analyzed. The scoring process for predicting each candidate segmentation mode of the text to be analyzed comprises the following steps:
executing, for each candidate segmentation mode of the text to be parsed:
Step S6021, predicting the probability that each text segment obtained by segmenting the text to be parsed according to the candidate segmentation mode is a phrase component, so as to obtain a probability corresponding to each text segment obtained by segmentation according to the candidate segmentation mode.
Specifically, for each text segment obtained by segmenting the text to be parsed according to the candidate segmentation mode, a characterization vector of the text segment is determined according to the forward vector of its first word, the forward vector of the backward-adjacent word of its last word, the backward vector of its last word and the backward vector of the forward-adjacent word of its first word, and the probability that the text segment is a phrase component is determined according to this characterization vector. It should be noted that the characterization vector of a text segment is determined in the same manner as the characterization vector of the text to be parsed; for details, refer to the specific implementation of determining the characterization vector of the text to be parsed described above, which is not repeated here.
Wherein the probability of a text segment being a phrase component can be determined by:
s_span(m, n) = V_s · g(W_s · S_mn + b_s)  (3)

where S_mn represents the characterization vector of the text segment composed of the m-th to the n-th words of the text to be parsed, s_span(m, n) represents the probability that this text segment is a phrase component, and V_s, W_s and b_s are parameters of the grammar analysis model obtained through training.
And step S6022, summing the probabilities respectively corresponding to the text segments obtained by segmentation in the candidate segmentation mode, wherein the summed probabilities are used as the score of the candidate segmentation mode.
Assume the text to be parsed is segmented according to a candidate segmentation mode k into text segments x_i~x_k1, x_k1+1~x_k2, ..., x_kn~x_j. Through step S6021, the probability s_span(i, k1) corresponding to segment x_i~x_k1, the probability s_span(k1+1, k2) corresponding to segment x_k1+1~x_k2, ..., and the probability s_span(kn, j) corresponding to segment x_kn~x_j are obtained. After the probability corresponding to each text segment is obtained, the probabilities are summed, and the summed probability is used as the score of candidate segmentation mode k; that is, the score s_split(i, k, j) of candidate segmentation mode k can be expressed as:
s_split(i, k, j) = s_span(i, k1) + s_span(k1+1, k2) + ... + s_span(kn, j)  (4)
The score of each candidate segmentation mode of the text to be analyzed can be obtained through the process. After obtaining the score of each candidate segmentation mode of the text to be analyzed, the candidate segmentation mode with the highest score can be determined as the segmentation mode of the text to be analyzed.
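The enumerate-score-select procedure over candidate segmentation modes can be sketched as follows (an illustrative pure-Python sketch; `span_score` is a hypothetical lookup standing in for the predicted phrase-component probabilities of formula (4)):

```python
from itertools import combinations

def segmentations(n):
    # Enumerate every way of cutting word positions 0..n-1 into
    # contiguous segments; each segment is an inclusive (start, end) pair.
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [(bounds[t], bounds[t + 1] - 1)
                   for t in range(len(bounds) - 1)]

def best_segmentation(n, span_score):
    # A candidate's score is the sum of its segments' phrase-component
    # probabilities; the candidate with the highest score is selected.
    return max(segmentations(n),
               key=lambda segs: sum(span_score[s] for s in segs))
```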
Alternatively, the syntax analysis model in this embodiment may include a component syntax analysis unit, and the component syntax analysis unit of the syntax analysis model may be used to determine the component labels and the segmentation method of the text to be analyzed in the above manner.
Next, the implementation of "step S203, predicting, by using the grammar parsing model, the dependency relationship labels between the text segments obtained after the text to be parsed is segmented according to its segmentation mode" will be described.
Referring to fig. 9, a flow chart for predicting dependency relationship labels between text segments obtained by segmenting a text to be parsed according to a segmentation mode of the text to be parsed by using a grammar parsing model may include:
And step S901, predicting the score of each candidate arc drawing mode of the text segment obtained after the text to be analyzed is segmented according to the segmentation mode of the text to be analyzed by utilizing the grammar analysis model, and determining the target arc drawing mode according to the score of each candidate arc drawing mode.
Wherein each arc drawn in each candidate arc drawing manner is a directed arc directed from one word in one text segment to one word in another text segment. In this embodiment, drawing an arc refers to drawing a directed arc for two words that may have a dependency relationship, and the directed arc is directed to the dependency word by the core word. In addition, the arcs in the same candidate arc drawing mode should not intersect. Referring to fig. 10, a schematic diagram of two candidate arc drawing modes for three text segments is shown.
For each candidate arc drawing mode, a characterization vector of each arc drawn according to that mode is obtained, the score of each arc is determined according to its characterization vector so as to obtain the score of each arc drawn according to the mode, the scores of the arcs are summed, and the summed score is used as the score of the candidate arc drawing mode. The score of each candidate arc drawing mode can be obtained in this way; after the scores are obtained, the candidate arc drawing mode with the highest score can be determined as the target arc drawing mode.
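The selection of the target arc drawing mode can be sketched as follows (an illustrative pure-Python sketch; arcs are hypothetical (head, dependent) pairs and `arc_score` stands in for the predicted per-arc scores):

```python
def best_arc_mode(candidate_modes, arc_score):
    # Each candidate mode is a collection of (head, dependent) arcs; the
    # mode's score is the sum of its arcs' scores, and the mode with the
    # highest summed score becomes the target arc drawing mode.
    return max(candidate_modes,
               key=lambda arcs: sum(arc_score[a] for a in arcs))
```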
The process of determining the characterization vector of an arc may include obtaining one or more of the following features of the two words connected by the arc: word-level features, distance features and sentence-level features (preferably, all three features are obtained), and determining the characterization vector of the arc according to the obtained features. Specifically, the obtained features may be input into a fully connected layer, and the output of the fully connected layer is used as the characterization vector of the arc.
In this embodiment, the target vector of each word in the target sentence (the spliced vector of its forward vector and backward vector) may be input into a BiLSTM to obtain word-level features for each word in the target sentence; for two words connected by an arc, their word-level features can then be taken from the word-level features of the words in the target sentence.
The distance feature of two words connected by an arc represents the distance between the two words. Optionally, if no other word lies between the two words, their distance is taken to be 1, and the vector representation of 1 is used as their distance feature; if one word lies between them, their distance is taken to be 2, and the vector representation of 2 is used as their distance feature; and so on.
The sentence-level features of two words connected by an arc can be obtained as follows: the target sentence is divided into three parts with the two words connected by the arc as boundaries; a characterization vector of the first part and a characterization vector of the last part are obtained according to the target vector of each word in the target sentence (the spliced vector of its forward vector and backward vector); the characterization vector of the last part is differenced with the characterization vector of the first part, and the resulting vector is used as the sentence-level feature of the two words connected by the arc. Optionally, the target vector of each word in the target sentence may first be input into a BiLSTM, and the output of the BiLSTM then input into an LSTM to obtain the characterization vectors of the first part and the last part.
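The differencing step for the sentence-level feature can be sketched as follows (an illustrative pure-Python helper; in practice the two part vectors come from the BiLSTM/LSTM described above):

```python
def sentence_level_feature(first_part_vec, last_part_vec):
    # Difference of the last part's and the first part's characterization
    # vectors, taken element-wise.
    return [last - first for last, first in zip(last_part_vec, first_part_vec)]
```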
Step S902, predicting the probability that the dependency labels of the two words connected by the arc are set dependency labels by using a grammar analysis model for each arc in the target arc drawing mode, and determining the dependency labels of the two words connected by the arc according to the probability that the dependency labels of the two words connected by the arc are set dependency labels.
The step of predicting the probability that the dependency label of the two words connected by the arc is the set dependency label may include predicting the probability that the dependency label of the two words connected by the arc is the set dependency label based on the characterization vector of the arc. Specifically, the token vector of the arc may be passed through the nonlinear layer and the full-connection layer to obtain the probability that the dependency label of the two words connected by the arc is the set dependency label.
After predicting the probability that the dependency label of the two words connected by the arc is the set dependency label, determining the dependency label corresponding to the maximum probability as the dependency label of the two words connected by the arc.
Optionally, the syntax analysis model in this embodiment may include a dependency syntax analysis unit, and the dependency syntax analysis unit of the syntax analysis model may predict, according to the above manner, dependency relationship labels between text segments obtained after the text to be analyzed is segmented according to the segmentation manner of the text to be analyzed.
Third embodiment
The implementation provided by this embodiment yields a joint parse tree that simultaneously presents the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence. Optionally, after the joint parse tree is obtained, it can be output directly, or only part of the information it presents can be output. For example, the hierarchical grammar structure information presented by the joint parse tree may be output, or the inter-word dependency relationship information may be output. Optionally, when the hierarchical grammar structure information is output, a hierarchical phrase set and/or tree structure can be output, so that a user can more quickly understand the structure and hierarchical relationships of the target sentence; when the inter-word dependency relationship information is output, the dependency relationship types involved in the target sentence (i.e., the dependency relationship labels) can be output, together with the word pairs that have dependency relationships and the dependency relationship labels of those word pairs.
In addition to outputting information, the information obtained by parsing may be processed further. For example, for a noun phrase obtained by parsing, proper nouns can be identified on the basis of the part-of-speech tags output by the part-of-speech tagging model, and for a verb phrase, fixed collocations matching the verb phrase can be looked up.
In addition, the part-of-speech tags output by the part-of-speech tagging model can themselves be output and interpreted; the parts of speech may be output as follows:
Nouns: singular and plural forms, proper nouns, foreign nouns, personal names, and possessive forms of nouns;
Verbs: singular and plural forms, past participles, present participles, and modal verbs;
Adjectives and adverbs: base form, comparative, and superlative;
Other categories: conjunctions, ordinal numbers, prepositions, personal pronouns, interjections, EX (existential "there"), "to", special symbols, wh-adverbs, possessive forms of nouns, and the like.
Further, for the inflected verb forms obtained by the part-of-speech tagging model, a large English lexical resource or a natural language processing toolkit can be used to obtain the base form of the verb, and the base form can be output together with the inflected form that appears in the sentence.
Furthermore, synonyms, near-synonyms, related words and the like of the words in the target sentence can be obtained from a corpus according to the word vectors of the words in the target sentence, so that this expanded knowledge helps users quickly understand unfamiliar and difficult words.
Fourth embodiment
As can be seen from the above embodiments, the grammar parsing of the target sentence is implemented by using a pre-established grammar parsing model, and the grammar parsing model is obtained by training a training sentence and a joint parsing tree corresponding to the training sentence.
Referring to fig. 11, a flow diagram of training a grammar parsing model using training sentences and a joint parsing tree corresponding to the training sentences is shown, which may include:
Step 1101, obtaining a training text, and taking the training text as a text to be analyzed.
And step 1102, predicting the probability of the component labels of the text to be parsed as the set component labels and the score of each candidate segmentation mode of the text to be parsed by using the grammar parsing model as a first prediction result.
It should be noted that the implementation of predicting, with the grammar parsing model, the probability that the component label of the text to be parsed is a set component label and the score of each candidate segmentation mode is similar to the corresponding implementation in the above embodiment; refer to the relevant parts of that embodiment, which are not repeated here.
Step S1103, predicting the score of each candidate arc drawing mode of the text segment obtained by cutting the text to be analyzed according to each candidate cutting mode of the text to be analyzed, and the probability that the dependency relationship label of two words connected by each arc in each candidate arc drawing mode is the set dependency relationship label, so as to obtain a prediction result in each candidate cutting mode as a second prediction result.
It should be noted that the implementation of predicting the score of each candidate arc drawing mode for the text segments obtained by segmenting the text to be parsed according to a candidate segmentation mode is similar to the implementation, in the above embodiment, of predicting the score of each candidate arc drawing mode for the text segments obtained according to the segmentation mode of the text to be parsed; likewise, predicting the probability that the dependency relationship label of the two words connected by each arc in a candidate arc drawing mode is a set dependency relationship label is similar to the corresponding prediction for the target arc drawing mode in the above embodiment, and is not described in detail here.
And step 1104, updating parameters of the grammar analysis model according to the first prediction result, the second prediction result and the relevant part in the joint analysis tree corresponding to the training text.
Specifically, according to the first prediction result, the second prediction result and the relevant part in the joint analysis tree corresponding to the training text, the process of updating parameters of the grammar analysis model comprises the following steps:
Step S1104-1, determining a first prediction loss of the grammar analysis model according to the first prediction result and the relevant part in the hierarchical grammar structure information presented by the joint analysis tree corresponding to the training text.
Alternatively, the cross entropy loss can be calculated according to the first prediction result and the relevant part in the hierarchical grammar structure information presented by the joint analysis tree corresponding to the training text, and the cross entropy loss is used as the first prediction loss of the grammar analysis model. The calculation manner of the cross entropy loss is the prior art, and this embodiment is not described herein.
Step S1104-2, determining a second prediction loss of the grammar analysis model according to the second prediction result and the relevant part in the inter-word dependency relationship information presented by the joint analysis tree corresponding to the training text.
Alternatively, the cross entropy loss can be calculated according to the second prediction result and the relevant part in the inter-word dependency relationship information presented by the joint parsing tree corresponding to the training text, and the cross entropy loss is used as the second prediction loss of the grammar parsing model.
And step S1104-3, fusing the first prediction loss and the second prediction loss, and updating parameters of the grammar analysis model according to the fused loss.
There are a number of ways to fuse the first prediction loss with the second prediction loss. In one possible implementation, the first prediction loss may be summed directly with the second prediction loss; in another possible implementation, the two may be summed with weights. If the first prediction loss is denoted LOSS1 and the second prediction loss is denoted LOSS2, the fused loss obtained in the first manner is LOSS1 + LOSS2, and the fused loss obtained in the second manner is α × LOSS1 + β × LOSS2, where α is the weight corresponding to LOSS1, β is the weight corresponding to LOSS2, and α and β may be set according to practical needs.
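Both fusion variants reduce to one weighted sum (a minimal sketch; the function name is hypothetical):

```python
def fuse_losses(loss1, loss2, alpha=1.0, beta=1.0):
    # With alpha = beta = 1 this is the direct sum LOSS1 + LOSS2;
    # other weights give the weighted sum alpha*LOSS1 + beta*LOSS2.
    return alpha * loss1 + beta * loss2
```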
Step S1105, for each text segment obtained by segmentation according to each candidate segmentation mode: if the text segment comprises a single word, the processing of that text segment ends; if the text segment comprises a plurality of words, the text segment is taken as the text to be parsed and the procedure returns to step S1102.
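The top-down recursion (label a segment, split it, recurse into every multi-word sub-segment, stop at single words) can be sketched as follows (an illustrative pure-Python sketch; `predict_label` and `split` are hypothetical stand-ins for the model's predictions):

```python
def parse(words, predict_label, split):
    # Label the current segment; if it contains more than one word,
    # split it and recurse into each sub-segment.
    node = {"label": predict_label(words), "text": words}
    if len(words) > 1:
        node["children"] = [parse(seg, predict_label, split)
                            for seg in split(words)]
    return node
```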
And performing iterative training on the grammar analysis model for a plurality of times according to the process until the training ending condition is met, wherein the model obtained after the training is ended is the built grammar analysis model.
Fifth embodiment
The embodiment of the application also provides a grammar analysis device, which is described below, and the grammar analysis device described below and the grammar analysis method described above can be correspondingly referred to each other.
Referring to fig. 12, a schematic structural diagram of a syntax parsing apparatus according to an embodiment of the present application may include a text obtaining module 1201 and a syntax parsing module 1202.
The text acquisition module 1201 is configured to acquire a target sentence.
The grammar parsing module 1202 is configured to parse the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by using a pre-established grammar parsing model, and generate a joint parsing tree capable of presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence at the same time according to the parsing result. The grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence.
Optionally, the grammar parsing module 1202 is specifically configured to, when parsing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by using a pre-established grammar parsing model:
Taking the target sentence as a text to be analyzed, predicting a component label of the text to be analyzed by using a pre-established grammar analysis model, and predicting a segmentation mode of the text to be analyzed;
predicting dependency relationship labels among text segments obtained after segmentation of the text to be analyzed according to the segmentation mode by using a pre-established grammar analysis model;
And for each text segment obtained by segmentation, if the text segment comprises a plurality of words, taking the text segment as a text to be analyzed, and executing the component labels for predicting the text to be analyzed by utilizing the pre-established grammar analysis model and the subsequent steps.
Optionally, the joint parsing tree includes a plurality of leaf nodes and a plurality of non-leaf nodes, each leaf node represents a word in the target sentence, each non-leaf node represents a phrase in the target sentence, the next level of each non-leaf node is a leaf node and/or a non-leaf node, the information of each node includes a word or phrase represented by the node and a component label of the word or phrase represented by the node, different nodes with hierarchical relationship are connected through a first connecting line, peer nodes with dependency relationship are connected through a second connecting line, and each second connecting line has a dependency relationship label.
Optionally, the syntax parsing module 1202, when predicting the component tags of the text to be parsed by using the syntax parsing model, is specifically configured to:
predicting the probability of the component labels of the text to be parsed as the set component labels by using a grammar parsing model, and determining the component labels of the text to be parsed according to the probability of the component labels of the text to be parsed as the set component labels;
The grammar parsing module 1202 is specifically configured to, when predicting a segmentation method of a text to be parsed by using a grammar parsing model:
Predicting the score of each candidate segmentation mode of the text to be analyzed by using the grammar analysis model, and determining the segmentation mode of the text to be analyzed according to the score of each candidate segmentation mode of the text to be analyzed.
Optionally, the syntax parsing module 1202 predicts, using a syntax parsing model, a probability that a component tag of a text to be parsed is a set component tag, including:
Determining a representation vector of a text to be parsed according to a forward vector of a first word, a forward vector of a backward adjacent word of a second word, the backward vector of the second word and the backward vector of the forward adjacent word of the first word by using a grammar parsing model, wherein the first word is the first word of the text to be parsed, the second word is the last word of the text to be parsed, the forward vector of one word can represent the semantic of the word when the target sentence is viewed from front to back, and the backward vector of one word can represent the semantic of the word when the target sentence is viewed from back to front;
And predicting the probability of the component labels of the text to be analyzed as the set component labels by using the grammar analysis model and taking the characterization vector of the text to be analyzed as the basis.
Optionally, the grammar parsing module 1202 determines, using the grammar parsing model, a token vector of a text to be parsed according to a forward vector of a first word, a forward vector of a backward neighboring word of a second word, a backward vector of the second word, and a backward vector of a forward neighboring word of the first word, including:
And utilizing a grammar analysis model to make a difference between the forward vector of the first word and the forward vector of the backward adjacent word of the second word so as to obtain a forward vector difference value, making a difference between the backward vector of the second word and the backward vector of the forward adjacent word of the first word so as to obtain a backward vector difference value, splicing the forward vector difference value and the backward vector difference value, and taking the spliced vector as a representation vector of a text to be analyzed.
Optionally, the syntax parsing module 1202 is further configured to:
Obtaining a word vector, a part-of-speech characterization vector and a position characterization vector of the word by utilizing the grammar analysis model, summing the part-of-speech characterization vector of the word and the position characterization vector of the word, splicing the summed vector with the word vector of the word, and taking the spliced vector as the characterization vector of the word to obtain the characterization vector of each word in the target sentence;
And performing attention calculation on the token vector of the word and the token vectors of other words in the target sentence by using the grammar analysis model to obtain a context vector of the word, and obtaining a forward vector and a backward vector of the word according to the context vector of the word.
Optionally, the grammar parsing module 1202 is specifically configured to, when predicting the score of each candidate segmentation method of the text to be parsed by using the grammar parsing model:
For each candidate segmentation approach:
Predicting the probability of each text segment obtained by segmenting the text to be analyzed according to the candidate segmentation mode as phrase components by using a grammar analysis model to obtain the probability corresponding to each text segment respectively, and summing the probabilities corresponding to the text segments respectively, wherein the summed probability is used as the score of the candidate segmentation mode;
To obtain the score of each candidate segmentation mode of the text to be analyzed.
Optionally, when predicting, by utilizing the grammar parsing model, the dependency relationship label between the text segments obtained after the text to be analyzed is segmented according to the segmentation mode, the grammar parsing module 1202 is specifically configured to:
Predict, by utilizing the grammar parsing model, the score of each candidate arc-drawing mode of the text segments obtained after the text to be analyzed is segmented according to the segmentation mode, and determine a target arc-drawing mode according to the scores of the candidate arc-drawing modes, wherein each arc in each candidate arc-drawing mode is a directed arc pointing from a word in one text segment to a word in another text segment;
For each arc in the target arc-drawing mode, predict, by utilizing the grammar parsing model, the probability that the dependency relationship label of the two words connected by the arc is the set dependency relationship label, and determine the dependency relationship label of the two words connected by the arc according to this probability.
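The two-stage prediction above — pick the highest-scoring candidate arc set, then label each arc with its most probable dependency label — can be sketched as follows. The score and label tables, and every name, are hypothetical illustrations only.

```python
def pick_arcs_and_labels(candidate_arc_sets, arc_score, label_probs):
    """candidate_arc_sets: list of arc sets; each arc is a (head, dependent)
    pair of words.  arc_score(arc) -> float score of one arc.
    label_probs(arc) -> {label: probability} for that arc."""
    # Stage 1: the target arc-drawing mode is the set with the highest total score.
    target = max(candidate_arc_sets,
                 key=lambda arcs: sum(arc_score(a) for a in arcs))
    # Stage 2: label each arc with the dependency label of highest probability.
    labeled = []
    for arc in target:
        probs = label_probs(arc)
        labeled.append((arc, max(probs, key=probs.get)))
    return labeled
```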
Optionally, when the grammar parsing module 1202 predicts the probability that the dependency label of the two words connected by the arc is the set dependency label by using the grammar parsing model, the grammar parsing module is specifically configured to:
Obtain, by utilizing the grammar parsing model, one or more of the following features of the two words connected by the arc: word-level features, distance features and sentence-level features; determine a characterization vector of the arc according to the obtained features; and predict, according to the characterization vector of the arc, the probability that the dependency relationship label of the two words connected by the arc is the set dependency relationship label.
Optionally, the grammar parsing module 1202 is specifically configured to, when using the grammar parsing model to obtain sentence-level features of the two words connected by the arc:
Obtain, by utilizing the grammar parsing model, the characterization vector of the first part and the characterization vector of the last part among the three parts obtained by dividing the text to be analyzed with the two words connected by the arc as boundaries, and compute the difference between the characterization vector of the last part and the characterization vector of the first part, the resulting vector being taken as the sentence-level feature of the two words connected by the arc.
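A minimal sketch of this sentence-level feature: the two arc words split the sentence into three parts, the first and last parts are each reduced to one vector (the sum pooling used here is an assumed choice, not stated in the application), and the feature is the last-part vector minus the first-part vector.

```python
def sentence_level_feature(word_vecs, i, j):
    """word_vecs: one vector per word of the sentence; i < j are the 0-based
    positions of the two words connected by the arc, which act as boundaries.
    Parts: words before i (first), words i..j, words after j (last)."""
    dim = len(word_vecs[0])

    def pool(part):
        # Characterize a part by summing its word vectors (assumed pooling).
        return [sum(v[d] for v in part) for d in range(dim)]

    first, last = word_vecs[:i], word_vecs[j + 1:]
    return [a - b for a, b in zip(pool(last), pool(first))]
```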
The grammar parsing apparatus provided by the embodiment of the present application may further include a model training module, and the model training module is configured to:
Take a training text as the text to be analyzed, and predict, by utilizing the grammar parsing model, the probability that the component label of the text to be analyzed is the set component label and the score of each candidate segmentation mode of the text to be analyzed, as a first prediction result;
Predict, for each candidate segmentation mode of the text to be analyzed, the score of each candidate arc-drawing mode of the text segments obtained by segmenting the text to be analyzed according to the candidate segmentation mode, and the probability that the dependency relationship label of the two words connected by each arc in each candidate arc-drawing mode is the set dependency relationship label, so as to obtain a prediction result under each candidate segmentation mode as a second prediction result;
Update the parameters of the grammar parsing model according to the first prediction result, the second prediction result and the relevant parts of the joint parse tree corresponding to the training text; and
For each text segment obtained by segmentation according to each candidate segmentation mode, if the text segment includes a plurality of words, take the text segment as the text to be analyzed, and perform the step of predicting, by utilizing the grammar parsing model, the probability that the component label of the text to be analyzed is the set component label and the score of each candidate segmentation mode of the text to be analyzed, and the subsequent steps.
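The recursive structure of the training pass above — predict on the full text, then recurse into every multi-word segment of every candidate segmentation mode — can be sketched as follows. The callback names and the shape of the prediction records are hypothetical.

```python
def collect_predictions(span, candidate_segmentations, predict_fn, out=None):
    """span: a (start, end) word range treated as the text to be analyzed.
    candidate_segmentations(span) -> list of candidate segmentation modes,
    each a list of child (start, end) segments.
    predict_fn(span) -> the prediction record for that span.
    Returns one (span, record) pair per text span visited."""
    if out is None:
        out = []
    out.append((span, predict_fn(span)))
    for mode in candidate_segmentations(span):
        for child in mode:
            if child[1] > child[0]:  # multi-word segment: recurse into it
                collect_predictions(child, candidate_segmentations,
                                    predict_fn, out)
    return out
```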
Optionally, when the model training module updates parameters of the grammar analysis model according to the first prediction result, the second prediction result and the relevant part in the joint analysis tree corresponding to the training text, the model training module is specifically configured to:
Determine a first prediction loss of the grammar parsing model according to the first prediction result and the relevant part of the hierarchical grammar structure information presented by the joint parse tree corresponding to the training text;
Determine a second prediction loss of the grammar parsing model according to the second prediction result and the relevant part of the inter-word dependency relationship information presented by the joint parse tree corresponding to the training text; and
Fuse the first prediction loss and the second prediction loss, and update the parameters of the grammar parsing model according to the fused loss.
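A minimal sketch of the loss fusion step: the constituency-side and dependency-side losses are combined into a single training loss. A weighted sum is one common fusion choice, used here purely as an assumption; the application does not fix the fusion function or the weight.

```python
def fused_loss(constituency_loss, dependency_loss, alpha=0.5):
    """Weighted-sum fusion of the two prediction losses.  The weight alpha
    is a hypothetical hyperparameter, not a value given in the application."""
    return alpha * constituency_loss + (1.0 - alpha) * dependency_loss
```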
The grammar parsing apparatus provided by the embodiment of the present application first acquires a target sentence, then parses the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by utilizing the pre-established grammar parsing model, and generates, according to the parsing result, a joint parse tree capable of simultaneously presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence. The grammar parsing apparatus provided by the embodiment of the present application can thus parse more detailed grammar information of the target sentence.
Sixth embodiment
The embodiment of the present application further provides a syntax parsing apparatus; referring to fig. 13, which shows a schematic structural diagram of the syntax parsing apparatus, the syntax parsing apparatus may include at least one processor 1301, at least one communication interface 1302, at least one memory 1303 and at least one communication bus 1304;
In the embodiment of the present application, the number of each of the processor 1301, the communication interface 1302, the memory 1303 and the communication bus 1304 is at least one, and the processor 1301, the communication interface 1302 and the memory 1303 communicate with one another through the communication bus 1304;
The processor 1301 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement the embodiments of the present invention, or the like;
The memory 1303 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one magnetic disk memory;
The memory stores a program, and the processor may invoke the program stored in the memory, the program being configured to:
acquiring a target sentence;
Analyzing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by using a pre-established grammar analysis model, and generating a joint analysis tree capable of simultaneously presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence according to an analysis result;
the grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence.
Optionally, for the refined functions and extended functions of the program, reference may be made to the description above.
Seventh embodiment
The embodiment of the present application also provides a readable storage medium storing a program adapted to be executed by a processor, the program being configured to:
acquiring a target sentence;
Analyzing the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence by using a pre-established grammar analysis model, and generating a joint analysis tree capable of simultaneously presenting the hierarchical grammar structure information and the inter-word dependency relationship information of the target sentence according to an analysis result;
the grammar analysis model is obtained by training a training sentence and a joint analysis tree corresponding to the training sentence.
Optionally, for the refined functions and extended functions of the program, reference may be made to the description above.
Finally, it is further noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts among the embodiments, reference may be made to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.