CN110008476B - Semantic analysis method, device, equipment and storage medium - Google Patents
- Publication number
- CN110008476B (application CN201910284812.6A)
- Authority
- CN
- China
- Prior art keywords
- word
- vector
- attention
- sequence
- semantic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Abstract
Description
Technical Field

The present disclosure relates to a semantic parsing method, a semantic parsing apparatus, an electronic device, and a readable storage medium.

Background

Conventional semantic parsing methods split intent prediction and semantic slot filling into two steps: first, the input sentence is classified by intent; second, semantic slots are filled under that specific intent. This approach tends to overlook the relationship between the intent and the semantic slots within the same sentence, even though the slots are usually closely tied to the intent.

The prior art has proposed methods that perform intent prediction and semantic slot filling simultaneously, using an attention mechanism to model the relationship between slots and intent, so as to improve the accuracy of both tasks. On the slot filling task, however, an attention mechanism alone does not achieve good results.
Summary of the Invention

To solve at least one of the above technical problems, the present disclosure provides a semantic parsing method, a semantic parsing apparatus, an electronic device, and a readable storage medium.

According to one aspect of the present disclosure, a semantic parsing method includes: obtaining features related to each word in an input sequence to produce a sequence containing the features related to each word; generating a first vector representation of the input sequence based on the sequence containing the features related to each word; generating an intent attention vector and a semantic slot attention vector based on the first vector representation; and obtaining, from the first vector representation, the intent attention vector and the semantic slot attention vector, a semantic parsing result corresponding to the input sequence, wherein the semantic parsing result includes an intent prediction result and a semantic slot filling result.

According to one aspect of the present disclosure, the sequence containing the features related to each word is obtained by fusing the word embeddings of the input sequence with the related features.

According to one aspect of the present disclosure, generating the first vector representation of the input sequence based on the sequence containing the features related to each word includes: inputting that sequence into a bidirectional long short-term memory network (BLSTM) and concatenating the forward hidden sequence with the backward hidden sequence to obtain the first vector representation.

According to one aspect of the present disclosure, generating the intent attention vector and the semantic slot attention vector based on the first vector representation includes: computing an attention weight for each hidden state in the first vector representation; obtaining the semantic slot attention vector based on each hidden state and its corresponding attention weight; and obtaining the intent attention vector based on the hidden state of the last time step and its corresponding attention weight.

According to one aspect of the present disclosure, the method further includes obtaining a weighted feature related to each hidden state based on the intent attention vector and the semantic slot attention vector, and obtaining the semantic slot filling result based on the weighted feature, the first vector representation and the semantic slot attention vector.

According to one aspect of the present disclosure, the method further includes inputting the weighted feature, the first vector representation and the semantic slot attention vector into a conditional random field (CRF) layer to obtain the semantic slot filling result.

According to one aspect of the present disclosure, the method further includes obtaining the features related to each word in the input sequence from a knowledge graph, and fusing, position by position, the word embeddings of the input sequence with the related features to obtain the sequence containing the features related to each word.

According to one aspect of the present disclosure, a semantic parsing apparatus is provided, including: a word-feature sequence generation module, configured to obtain the features related to each word in the input sequence and produce a sequence containing those features; a first vector representation generation module, configured to generate the first vector representation of the input sequence based on that sequence; an attention vector generation module, configured to generate the intent attention vector and the semantic slot attention vector based on the first vector representation; and a semantic parsing result generation module, configured to obtain, from the first vector representation, the intent attention vector and the semantic slot attention vector, the semantic parsing result corresponding to the input sequence, wherein the semantic parsing result includes an intent prediction result and a semantic slot filling result.

According to yet another aspect of the present disclosure, an electronic device includes: a memory storing computer-executable instructions; and a processor that executes the computer-executable instructions stored in the memory, so that the processor performs the above method.

According to still another aspect of the present disclosure, a readable storage medium stores computer-executable instructions which, when executed by a processor, implement the above method.
Brief Description of the Drawings

The accompanying drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the disclosure. They are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification.

Fig. 1 is a schematic flowchart of a semantic parsing method according to an embodiment of the present disclosure.

Fig. 2 is a schematic flowchart of an attention vector generation process according to an embodiment of the present disclosure.

Fig. 3 is a schematic block diagram of a semantic parsing apparatus according to an embodiment of the present disclosure.

Fig. 4 is a schematic block diagram of an attention vector generation apparatus according to an embodiment of the present disclosure.

Fig. 5 is a schematic view of an electronic device according to an embodiment of the present disclosure.
Detailed Description

The present disclosure is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant content and do not limit the present disclosure. It should also be noted that, for ease of description, only the parts related to the present disclosure are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features within those embodiments may be combined with one another. The present disclosure is described in detail below with reference to the drawings in conjunction with the embodiments.

In the related art, semantic parsing is a very important module in natural language processing (NLP), covering intent prediction and semantic slot filling. Intent prediction determines the user's intent; common intents include "weather query", "music on demand" and "video on demand". Semantic slot filling extracts, under a specific intent, the corresponding entities to further refine the parsed meaning. For example, under the "weather query" intent the slots may be "city name" and "time"; under the "music on demand" intent the slots may be "song name", "singer name", "album name" and "song genre"; and under the "video on demand" intent the slots may be "video name", "director" and "actor".
Fig. 1 is a schematic flowchart of a semantic parsing method according to an embodiment of the present disclosure.

In one embodiment of the present disclosure, the semantic parsing method includes: S11, obtaining features related to each word in the input sequence to produce a sequence containing the features related to each word; S12, generating a first vector representation of the input sequence based on that sequence; S13, generating an intent attention vector and a semantic slot attention vector based on the first vector representation; and S14, obtaining, from the first vector representation, the intent attention vector and the semantic slot attention vector, a semantic parsing result corresponding to the input sequence, wherein the semantic parsing result includes an intent prediction result and a semantic slot filling result.

In step S11, the features related to each word in the input sequence are obtained, and the sequence containing the features related to each word is produced by fusing the word embeddings of the input sequence with the related features.

In the present disclosure, for an input sequence x = {x1, x2, …, xT}, at each time step i the word embedding is fused (for example, concatenated) with one or more related features to obtain (xi, fi), thereby generating a sequence containing the features related to each word. In the present disclosure, this sequence is called a word-feature sequence.

Specifically, one or more features related to each word of the input sequence are looked up in a knowledge graph. The features may include part of speech, attributes, relationships to other entities, and the like. The word embeddings of the input sequence are then fused, position by position, with the related features to obtain the sequence containing the features related to each word.
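The fusion step above can be sketched in a few lines of Python. This is illustrative only: the embedding and feature values are invented for the example, and the patent specifies concatenation only as one possible fusion operator.

```python
def fuse_word_features(word_embeddings, feature_vectors):
    """Concatenate each word embedding x_i with its feature vector f_i,
    producing one (x_i, f_i) entry of the word-feature sequence."""
    assert len(word_embeddings) == len(feature_vectors)
    return [x + f for x, f in zip(word_embeddings, feature_vectors)]

# Toy data: 2-dim word embeddings and a 1-dim knowledge-graph feature
# (e.g. an "is-entity" flag) for a 3-word input sequence.
x = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
f = [[1.0], [0.0], [1.0]]
word_feature_seq = fuse_word_features(x, f)
print(word_feature_seq[0])  # [0.1, 0.2, 1.0]
```

In a real system the embeddings would come from a trained embedding table and the features from the knowledge-graph lookup described above.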
In step S12, the word-feature sequence is input into a neural network model, for example a bidirectional long short-term memory network (BLSTM), to obtain the first vector representation, which models the contextual information in the input word-feature sequence. Those skilled in the art will understand that the BLSTM is exemplary; other neural network models that achieve similar or richer functionality may also be used.

Specifically, in the BLSTM, the forward hidden sequence and the backward hidden sequence are concatenated to obtain the first vector representation; at each time step i it can be expressed as a hidden state hi.
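The forward/backward concatenation can be sketched as follows; the numeric states below are arbitrary stand-ins for real BLSTM outputs, not values the patent provides.

```python
def concat_bidirectional(forward_states, backward_states):
    """Form each hidden state h_i by concatenating the forward and
    backward hidden sequences at time step i."""
    return [fw + bw for fw, bw in zip(forward_states, backward_states)]

fw = [[0.1, 0.2], [0.3, 0.4]]  # forward hidden sequence (toy values)
bw = [[0.9, 0.8], [0.7, 0.6]]  # backward hidden sequence (toy values)
h = concat_bidirectional(fw, bw)
print(h[0])  # [0.1, 0.2, 0.9, 0.8]
```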
Next, in step S13, the first vector representation is input to the intent attention layer and the semantic slot attention layer, yielding the intent attention vector and the semantic slot attention vector.

Finally, in step S14, the first vector representation, the intent attention vector and the semantic slot attention vector are input to the output layer, yielding the semantic parsing result corresponding to the input sequence, where the semantic parsing result includes an intent prediction result and a semantic slot filling result.

For example, for the input sequence "播放XYZ的七里香" ("Play XYZ's Qilixiang", where XYZ stands for a singer's name), the predicted intent is "music on demand", whose corresponding slots include "singer name" and "song name"; the "singer name" slot is filled with "XYZ" and the "song name" slot is filled with "七里香" (Qilixiang).
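The slot filling result for this example can be represented with BIO-style tags, a common convention for slot filling that the patent does not mandate; the slot names below are illustrative.

```python
def slots_from_tags(tokens, tags):
    """Collect slot values from BIO tags (B- opens a slot, I- continues it)."""
    slots, current = {}, None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = tag[2:]
            slots[current] = tok
        elif tag.startswith("I-") and current:
            slots[current] += tok
        else:
            current = None
    return slots

tokens = ["播放", "XYZ", "的", "七里香"]
tags = ["O", "B-singer_name", "O", "B-song_name"]
intent = "music_on_demand"  # intent prediction result for this sentence
print(slots_from_tags(tokens, tags))  # {'singer_name': 'XYZ', 'song_name': '七里香'}
```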
Fig. 2 is a schematic flowchart of an attention vector generation process according to an embodiment of the present disclosure.

In step S21, attention weights are obtained in the intent attention layer and the semantic slot attention layer using an attention mechanism.

In steps S22 and S23, at each time step i the hidden states hi are weighted by their attention weights and summed to obtain the semantic slot attention vector, and the intent attention vector is obtained by weighting the hidden state of the last time step with its corresponding attention weight.
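The weighted sum in steps S22 and S23 can be sketched with a plain softmax attention. The patent does not specify how the raw attention scores are produced (typically a learned scoring function), so the scores are supplied directly here as assumed values.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def slot_attention_vector(hidden_states, scores):
    """Semantic slot attention vector: attention-weighted sum of all h_i."""
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # toy hidden states
c_slot = slot_attention_vector(h, [0.0, 0.0, 0.0])  # equal scores -> uniform
# Intent attention vector: last time step's state scaled by its weight.
w_last = softmax([0.0, 0.0, 0.0])[-1]
c_intent = [w_last * v for v in h[-1]]
```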
Next, in step S24, a weighted feature related to each hidden state is obtained based on the semantic slot attention vector and the intent attention vector, for example through a slot gate (slot-gated mechanism).

The first vector representation, the semantic slot attention vector, and the weighted features obtained by fusing the semantic slot attention vector with the intent attention vector are then taken as input to the output layer (for example, a conditional random field layer) to obtain the semantic slot filling result. By fusing the semantic slot attention vector and the intent attention vector, the relationship between the intent and the semantic slots is modeled, improving the performance of both intent prediction and semantic slot filling.
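One simplified form of this slot-gate fusion is sketched below. The patent gives no closed-form equations, so this follows the widely used slot-gated formulation with a scalar gate; the parameters `v` and `w` are assumed, untrained values.

```python
import math

def slot_gate(c_slot, c_intent, v, w):
    """Scalar gate g = sum_k v_k * tanh(c_slot_k + w * c_intent_k)."""
    return sum(vk * math.tanh(cs + w * ci)
               for vk, cs, ci in zip(v, c_slot, c_intent))

def gated_features(hidden_states, c_slot, c_intent, v, w):
    """Weighted feature per hidden state: h_i + g * c_slot, element-wise.
    These features, with h and c_slot, would then feed the CRF layer."""
    g = slot_gate(c_slot, c_intent, v, w)
    return [[hd + g * cs for hd, cs in zip(h_i, c_slot)]
            for h_i in hidden_states]

# With zero attention vectors, tanh(0) = 0, the gate is 0, and the
# hidden states pass through unchanged.
out = gated_features([[1.0, 2.0]], [0.0, 0.0], [0.0, 0.0], [1.0, 1.0], 1.0)
print(out)  # [[1.0, 2.0]]
```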
According to still another embodiment of the present disclosure, a semantic parsing apparatus 30 is provided. Fig. 3 shows a semantic parsing apparatus according to an embodiment of the present disclosure, which includes: a word-feature sequence generation module 31, configured to obtain the features related to each word in the input sequence and produce a sequence containing those features; a first vector representation generation module 32, configured to generate the first vector representation of the input sequence based on that sequence; an attention vector generation module 33, configured to generate the intent attention vector and the semantic slot attention vector based on the first vector representation; and a semantic parsing result generation module 34, configured to obtain, from the first vector representation, the intent attention vector and the semantic slot attention vector, the semantic parsing result corresponding to the input sequence, wherein the semantic parsing result includes an intent prediction result and a semantic slot filling result.

According to still another embodiment of the present disclosure, an attention vector generation apparatus 40 is provided as a concrete implementation within the semantic parsing apparatus 30. Fig. 4 shows an attention vector generation apparatus 40 according to another embodiment of the present disclosure. It includes: an attention weight generation module 41, configured to generate the attention weight corresponding to each hidden state; a semantic slot attention vector generation module 42, configured to obtain the semantic slot attention vector based on each hidden state and its corresponding attention weight; an intent attention vector generation module 43, configured to obtain the intent attention vector based on the hidden state of the last time step and its corresponding attention weight; and a joint weighted feature generation module 44, configured to obtain the weighted feature related to each hidden state based on the intent attention vector and the semantic slot attention vector.

The processing performed in each of the above modules corresponds to the respective process described in detail in the above method.
The present disclosure further provides an electronic device. As shown in Fig. 5, the device includes a communication interface 1000, a memory 2000 and a processor 3000. The communication interface 1000 communicates with external devices for interactive data transmission. The memory 2000 stores a computer program executable on the processor 3000. The processor 3000, when executing the computer program, implements the method in the above embodiments. There may be one or more memories 2000 and processors 3000.

The memory 2000 may include high-speed RAM, and may also include non-volatile memory, for example at least one magnetic disk memory.

If the communication interface 1000, the memory 2000 and the processor 3000 are implemented independently, they may be connected to one another and communicate via a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus.

Optionally, in a specific implementation, if the communication interface 1000, the memory 2000 and the processor 3000 are integrated on a single chip, they may communicate with one another through internal interfaces.

The present invention, addressing actual business needs, improves on the shortcomings of existing word segmentation algorithms by combining machine learning algorithms with domain-customized dictionaries, which on the one hand improves word segmentation accuracy and on the other hand improves domain adaptability for actual application scenarios.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present disclosure belong. The processor performs the methods and processes described above. For example, method embodiments of the present disclosure may be implemented as a software program tangibly embodied in a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via the memory and/or the communication interface. When the software program is loaded into the memory and executed by the processor, one or more steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the above methods in any other suitable manner (for example, by means of firmware).

The logic and/or steps shown in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device).

For the purposes of this specification, a "readable storage medium" may be any means that can contain, store, communicate, propagate or transport a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it if necessary, and then stored in a memory.

It should be understood that parts of the present disclosure may be implemented in hardware, software or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following techniques known in the art, or a combination thereof, may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.

Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.

Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise explicitly and specifically defined.

Those skilled in the art should understand that the above embodiments are merely intended to illustrate the present disclosure clearly, not to limit its scope. Other changes or modifications may be made on the basis of the above disclosure, and such changes or modifications remain within the scope of the present disclosure.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910284812.6A CN110008476B (en) | 2019-04-10 | 2019-04-10 | Semantic analysis method, device, equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110008476A CN110008476A (en) | 2019-07-12 |
| CN110008476B true CN110008476B (en) | 2023-04-28 |
Family
ID=67170734
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910284812.6A Active CN110008476B (en) | 2019-04-10 | 2019-04-10 | Semantic analysis method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110008476B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110674314B (en) * | 2019-09-27 | 2022-06-28 | 北京百度网讯科技有限公司 | Sentence recognition method and device |
| CN110853626B (en) * | 2019-10-21 | 2021-04-20 | 成都信息工程大学 | Dialogue understanding method, device and device based on bidirectional attention neural network |
| CN111046674B (en) * | 2019-12-20 | 2024-05-31 | 科大讯飞股份有限公司 | Semantic understanding method and device, electronic equipment and storage medium |
| CN113505591B (en) * | 2020-03-23 | 2025-06-24 | 华为技术有限公司 | Slot identification method and electronic device |
| CN113204952B (en) * | 2021-03-26 | 2023-09-15 | 南京邮电大学 | A joint recognition method of multi-intent and semantic slots based on clustering pre-analysis |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107315737A (en) * | 2017-07-04 | 2017-11-03 | 北京奇艺世纪科技有限公司 | A kind of semantic logic processing method and system |
| CN108920666A (en) * | 2018-07-05 | 2018-11-30 | 苏州思必驰信息科技有限公司 | Searching method, system, electronic equipment and storage medium based on semantic understanding |
| CN109241524A (en) * | 2018-08-13 | 2019-01-18 | 腾讯科技(深圳)有限公司 | Semantic analysis method and device, computer readable storage medium, electronic equipment |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107273487A (en) * | 2017-06-13 | 2017-10-20 | 北京百度网讯科技有限公司 | Generation method, device and the computer equipment of chat data based on artificial intelligence |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110008476A (en) | 2019-07-12 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 20241111 | TR01 | Transfer of patent right | Effective date of registration: 20241111. Address after: 200232 room 2015, floor 2, No. 24, Lane 315, Fenggu Road, Xuhui District, Shanghai. Patentee after: SHANGHAI MOBVOI INFORMATION TECHNOLOGY Co.,Ltd. (China). Address before: 100094 1001, 10th floor, office building a, 19 Zhongguancun Street, Haidian District, Beijing. Patentee before: MOBVOI INFORMATION TECHNOLOGY Co.,Ltd. (China). |