
CN106875940B - Machine self-learning construction knowledge graph training method based on neural network - Google Patents


Info

Publication number: CN106875940B
Application number: CN201710127387.0A (filed by Jilin Sirong Technology Co ltd)
Authority: CN (China)
Prior art keywords: neural network, sentence, statement, answer, vector
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN106875940A
Inventors: 王东亮, 刘颖博, 王洪斌, 姜雨霁, 姚兴, 于延龙, 李晓文, 张志微, 张凌凌
Current assignee: Jilin Sirong Technology Co ltd
Original assignee: Jilin Sirong Technology Co ltd

Classifications

    • G10L 15/183: Speech recognition; speech classification or search using natural language modelling with context dependencies, e.g. language models
    • G10L 15/063: Speech recognition; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 17/22: Speaker identification or verification techniques; interactive procedures; man-machine interfaces
    • G10L 21/0208: Speech enhancement, e.g. noise reduction or echo cancellation; noise filtering
    • G10L 25/30: Speech or voice analysis techniques characterised by the analysis technique, using neural networks
    • G10L 25/72: Speech or voice analysis techniques specially adapted for transmitting results of analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a neural-network-based training method by which a machine self-learns to construct a knowledge graph, comprising the following steps: obtaining a sentence uttered by a user in a natural scene and filtering and denoising the input sentence with a speech noise-reduction algorithm; determining a matched feedback sentence; and, if no match is found, giving an answer to the user's sentence with a neural-network dialogue model, as follows: the encoding layer of the user-sentence model is constructed as a first neural network, in which the user's sentence is parsed to obtain a first intermediate vector expressing its semantics; the decoding layer of the dialogue-generation model is constructed as a second neural network, in which the intermediate vector is parsed to obtain a vector group representing the sentence's answer.

Description

Machine self-learning construction knowledge graph training method based on neural network
Technical Field
The invention relates to the field of intelligent robots, in particular to a machine self-learning construction knowledge graph training method based on a neural network.
Background
A chat bot (chatterbot) is a program that simulates human conversation or chat. Chat bots arose because developers could place question-answer pairs of interest into a database: when a question is put to the bot, a similarity-matching algorithm finds the closest stored question, and the answer associated with that question is returned to the chat partner as the most appropriate reply.
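The retrieval mechanism described above (find the closest stored question, return its answer) can be sketched as follows. The sample question-answer pairs, the bag-of-words representation, and the 0.5 threshold are illustrative assumptions, not the patent's algorithm:

```python
import math
from collections import Counter

# Hypothetical question-answer pairs standing in for the robot's database.
QA_BASE = {
    "what is your name": "I am a chat bot.",
    "how is the weather today": "I cannot see outside, sorry.",
}

def bag_of_words(text):
    """Represent a sentence as a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(question, threshold=0.5):
    """Answer with the closest stored question's answer, or None if no
    stored question is similar enough (the failure case discussed below)."""
    q = bag_of_words(question)
    best, best_sim = None, 0.0
    for stored in QA_BASE:
        sim = cosine(q, bag_of_words(stored))
        if sim > best_sim:
            best, best_sim = stored, sim
    return QA_BASE[best] if best is not None and best_sim >= threshold else None
```

The `None` return is exactly the situation that motivates the invention: the knowledge base has no sufficiently similar question.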
However, in current robot chat scenarios, when no question in the robot's knowledge base is the same as or similar to the question posed by the user, the robot cannot return a correct or appropriate answer.
This limitation of the prior art not only leaves the robot's knowledge base bounded but also produces semantic-understanding errors, so the user's experience of conversing with the robot is poor. Knowledge reasoning in the prior art is likewise limited: conventionally, developers solve reasoning problems by hand-writing rules, yet it is impossible for developers to exhaust and encode all such rules, since in natural language processing there are always rules that cannot be written down completely. The robot therefore needs the ability to learn and reason on its own.
Disclosure of Invention
The invention designs and develops a neural-network-based training method for machine self-learning construction of a knowledge graph; by adopting a threshold speech noise-reduction algorithm, a smaller mean square error can be obtained, improving the signal-to-noise ratio of the reconstructed speech signal.
It is a further object of the present invention to train a dialogue model with a neural network, so that the robot can talk freely with the user using the trained model.
The technical scheme provided by the invention is as follows:
a machine self-learning knowledge-graph construction training method based on a neural network, comprising the following steps:
obtaining a sentence uttered by a user in a natural scene, filtering and denoising the input sentence with a threshold speech noise-reduction algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and its category;
determining a matched feedback sentence according to the sentence category;
if no match is found, giving an answer to the user's sentence according to the neural-network dialogue model, which comprises the following steps:
constructing the encoding layer of the user-sentence model as a first neural network, and parsing the user's sentence in the first neural network to obtain a first intermediate vector expressing its semantics;
constructing the decoding layer of the dialogue-generation model as a second neural network, and parsing the intermediate vector in the second neural network to obtain a vector group representing the sentence's answer; and
outputting the vector group representing the sentence's answer as the answer to the question.
Preferably, parsing the user's sentence in the first neural network comprises the following steps:
in the encoding layer, splitting the user's input sentence into the smallest word units that carry semantics, obtaining the attribute of each word unit, selecting at least one information-rich word as a central word, and feeding the central words in vector form into the input layer of the first neural network as a question vector group;
in the hidden layer of the first neural network, semantically analysing the output of the input layer together with the hidden layer's own output at the previous moment, and forming, by linear weighted combination, an intermediate vector representing the meaning of the sentence.
Preferably, parsing the intermediate vector in the second neural network comprises the following steps:
receiving the intermediate vector at the decoding layer and feeding it to the input layer of the second neural network;
in the hidden layer of the second neural network, semantically analysing the intermediate vector from the input layer together with the hidden layer's own output at the previous moment, and generating in sequence a plurality of single vectors forming an answer vector group, the semantics of each single vector corresponding to the semantics of one smallest word unit in the answer output sentence; and
outputting the answer vector group at the output layer of the second neural network.
Preferably, after the answer vector group is output as an answer sentence, the answer sentence is stored in the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
Preferably, after the knowledge-base matching calculation, a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and whether the dialogue-generation model must be asked for an answer is determined by the validity of that request flag bit.
Preferably, the linear weighted combination comprises the following steps:
Step one: counting n groups of central-word data groups extracted from user input sentences, n being a positive integer; with x_i the probability of the central word appearing within λ days in each data group, and y_i the probability of the central word appearing within λ days in the data group of the preceding sentence, establishing the univariate regression model
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, …, n, and ω′_i is the weighted regression coefficient within λ days.
Step two: solving the formula of step one by the least-squares method, and calculating the regression-coefficient estimates within λ days:
ω̂′_i = Σ_{j=1}^{n} (x_{ij} − x̄_j)(y_{ij} − ȳ_j) / Σ_{j=1}^{n} (x_{ij} − x̄_j)²
where ω̂′_i is the estimate of the regression coefficient; x_{ij} is the probability of the i-th central word appearing in the j-th central-word data group; x̄_j is the mean probability of the j-th central-word data group; y_{ij} is the probability of the i-th central word appearing in the central-word data group of the sentence at the moment before the j-th user input sentence; and ȳ_j is the mean probability of the j-th preceding-moment central-word data group.
Step three: carrying out normalisation to obtain the weighted weight values:
ω_i = ω̂′_i / Σ_{k=1}^{n} ω̂′_k
where ω_i is the weighted weight of the user input sentence.
Preferably, the output collocation is presented for the user to select and apply, and when the output answer is accurate it is stored in the knowledge base.
Preferably, the speech noise-reduction algorithm comprises:
a. distinguishing, through endpoint detection, silent frames from speech frames;
b. for a silent frame, taking the power-spectrum value of the current frame as the noise power-spectrum estimate; for a speech frame, calculating the speech-noise power-spectrum estimate;
c. subtracting the noise power-spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum; and
d. obtaining the denoised speech frame from the denoised speech power spectrum.
The speech-noise power-spectrum estimate is calculated by the formula given in the original [formula image not reproduced], in which I is the noise power-spectrum energy; λ is the threshold value [its defining formula image is likewise not reproduced]; n is the frame number of the noise signal; J is a conversion coefficient (1 to 5); e is the natural constant; π is the ratio of a circle's circumference to its diameter; f_c is the frequency of the noise signal; τ(t) = 0.03t² + 0.6t + 0.1; and t is the decomposition scale, 1 ≤ t ≤ 4.
The invention has the following advantages:
The invention designs and develops a neural-network-based training method for machine self-learning construction of a knowledge graph; adopting a threshold speech noise-reduction algorithm yields a smaller mean square error, improving the signal-to-noise ratio of the reconstructed speech signal.
A further object of the invention is to train a dialogue model with a neural network, so that the robot can talk freely with the user using the trained model.
Drawings
FIG. 1 is a flow chart of the neural-network-based machine self-learning knowledge-graph construction training method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it by reference to this description.
As shown in FIG. 1, the neural-network-based machine self-learning knowledge-graph construction training method provided by the invention comprises the following steps:
S100: obtaining a sentence uttered by a user in a natural scene, filtering and denoising the input sentence with a threshold speech noise-reduction algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and its category;
S200: determining a matched feedback sentence according to the sentence category;
S300: if no match is found, giving an answer to the user's sentence according to the neural-network dialogue model, which comprises the following steps:
S310: constructing the encoding layer of the user-sentence model as a first neural network, and parsing the user's sentence in the first neural network to obtain a first intermediate vector expressing its semantics;
S320: constructing the decoding layer of the dialogue-generation model as a second neural network, and parsing the intermediate vector in the second neural network to obtain a vector group representing the sentence's answer; and
S400: outputting the vector group representing the sentence's answer as the answer to the question.
Parsing the user's sentence in the first neural network in step S310 comprises the following steps:
S311: in the encoding layer, splitting the user's input sentence into the smallest word units that carry semantics, obtaining the attribute of each word unit, selecting at least one information-rich word as a central word, and feeding the central words in vector form into the input layer of the first neural network as a question vector group;
S312: in the hidden layer of the first neural network, semantically analysing the output of the input layer together with the hidden layer's own output at the previous moment, and forming, by linear weighted combination, an intermediate vector representing the meaning of the sentence.
Parsing the intermediate vector in the second neural network in step S320 comprises the following steps:
S321: receiving the intermediate vector at the decoding layer, and feeding it to the input layer of the second neural network;
S322: in the hidden layer of the second neural network, semantically analysing the intermediate vector from the input layer together with the hidden layer's own output at the previous moment, and generating in sequence a plurality of single vectors forming an answer vector group, the semantics of each single vector corresponding to the semantics of one smallest word unit in the answer output sentence;
and outputting the answer vector group at the output layer of the second neural network.
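The encoder-decoder pair of steps S310 and S320 can be sketched with plain recurrent networks. The NumPy sketch below uses random, untrained weights and toy sizes; every name, dimension, and token id is illustrative, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
H, V = 8, 5  # hidden size and vocabulary size: toy values, not from the patent

# Random, untrained weights stand in for a trained dialogue model.
W_xh_enc, W_hh_enc = rng.normal(size=(H, V)), rng.normal(size=(H, H))
W_xh_dec, W_hh_dec = rng.normal(size=(H, V)), rng.normal(size=(H, H))
W_hy_dec = rng.normal(size=(V, H))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def encode(token_ids):
    """First network: at each step the hidden layer combines the input-layer
    output with its own output at the previous moment; the final hidden state
    is the intermediate vector summarising the sentence."""
    h = np.zeros(H)
    for i in token_ids:
        h = np.tanh(W_xh_enc @ one_hot(i) + W_hh_enc @ h)
    return h

def decode(h, max_len=4):
    """Second network: unroll from the intermediate vector, emitting one
    answer vector (scores over the vocabulary) per step."""
    outputs, x = [], np.zeros(V)
    for _ in range(max_len):
        h = np.tanh(W_xh_dec @ x + W_hh_dec @ h)
        y = W_hy_dec @ h
        outputs.append(y)
        x = one_hot(int(np.argmax(y)))  # feed the chosen word back in
    return outputs

answer_vectors = decode(encode([0, 2, 1]))  # token ids are illustrative
```

Each element of `answer_vectors` plays the role of one single vector in the answer vector group; mapping it back to a word would use the vocabulary index with the highest score.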
In another embodiment, after the answer vector group is output as an answer sentence, the answer sentence is stored in the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
In another embodiment, after the knowledge-base matching calculation, a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and whether the dialogue-generation model must be asked for an answer is determined by the validity of that request flag bit.
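The flag-setting logic can be sketched as follows. The similarity function (difflib's character ratio) and the 0.8 threshold are stand-ins for the patent's matching-degree calculation and predetermined value:

```python
import difflib

def lookup(user_sentence, knowledge_base, threshold=0.8):
    """Return (answer, request_flag). The flag is True when no stored
    sentence reaches the required matching degree, signalling that the
    dialogue-generation model should be asked for an answer instead."""
    best_answer, best_score = None, 0.0
    for question, answer in knowledge_base.items():
        # difflib's ratio is a stand-in for the patent's matching degree.
        score = difflib.SequenceMatcher(None, user_sentence, question).ratio()
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= threshold:
        return best_answer, False  # flag invalid: the knowledge base answered
    return None, True              # flag valid: request the dialogue model
```

The caller would route the sentence to the generation model whenever the returned flag is valid (True).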
In another embodiment, the linear weighted combination in step S312 comprises the following steps:
Step one: counting n groups of central-word data groups extracted from user input sentences, n being a positive integer; with x_i the probability of the central word appearing within λ days in each data group, and y_i the probability of the central word appearing within λ days in the data group of the preceding sentence, establishing the univariate regression model
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, …, n, and ω′_i is the weighted regression coefficient within λ days.
Step two: solving the formula of step one by the least-squares method, and calculating the regression-coefficient estimates within λ days:
ω̂′_i = Σ_{j=1}^{n} (x_{ij} − x̄_j)(y_{ij} − ȳ_j) / Σ_{j=1}^{n} (x_{ij} − x̄_j)²
where ω̂′_i is the estimate of the regression coefficient; x_{ij} is the probability of the i-th central word appearing in the j-th central-word data group; x̄_j is the mean probability of the j-th central-word data group; y_{ij} is the probability of the i-th central word appearing in the central-word data group of the sentence at the moment before the j-th user input sentence; and ȳ_j is the mean probability of the j-th preceding-moment central-word data group.
Step three: carrying out normalisation to obtain the weighted weight values:
ω_i = ω̂′_i / Σ_{k=1}^{n} ω̂′_k
where ω_i is the weighted weight of the user input sentence.
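Steps one to three can be sketched numerically. Since the exact formulas appear only as images in the original patent, the centred least-squares estimator below is a reconstruction implied by the stated variable definitions and should be treated as an assumption:

```python
import numpy as np

def regression_weights(x, y):
    """Least-squares slope per central word, then normalisation to weights.
    x[i, j]: probability of the i-th central word in the j-th data group;
    y[i, j]: the corresponding probability in the previous sentence's group.
    The centred estimator is the standard least-squares form implied by the
    variable definitions; treat it as a reconstruction, not the patent's
    exact formula."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm = x.mean(axis=0)  # per-group means (the x-bar_j of the text)
    ym = y.mean(axis=0)  # per-group means (the y-bar_j of the text)
    slope = ((x - xm) * (y - ym)).sum(axis=1) / ((x - xm) ** 2).sum(axis=1)
    return slope / slope.sum()  # step three: normalised weighted weights
```

By construction the returned weights sum to one, which is what the normalisation of step three provides.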
Preferably, the output collocation is presented for the user to select and apply, and when the output answer is accurate it is stored in the knowledge base.
In another embodiment, the threshold speech noise-reduction algorithm of step S100 comprises:
a. distinguishing, through endpoint detection, silent frames from speech frames;
b. for a silent frame, taking the power-spectrum value of the current frame as the noise power-spectrum estimate; for a speech frame, calculating the speech-noise power-spectrum estimate;
c. subtracting the noise power-spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum; and
d. obtaining the denoised speech frame from the denoised speech power spectrum.
The speech-noise power-spectrum estimate is calculated by the formula given in the original [formula image not reproduced], in which I is the noise power-spectrum energy; λ is the threshold value [its defining formula image is likewise not reproduced]; n is the frame number of the noise signal; J is a conversion coefficient (1 to 5); e is the natural constant; π is the ratio of a circle's circumference to its diameter; f_c is the frequency of the noise signal; τ(t) = 0.03t² + 0.6t + 0.1; and t is the decomposition scale, 1 ≤ t ≤ 4.
In practice, a noise-bearing speech recording is obtained through a speech-collection device, and endpoint detection divides its frames into silent frames and speech frames. For a silent frame, the power-spectrum value of the current frame is taken as the noise power-spectrum estimate, while for a speech frame the speech-noise power-spectrum estimate is calculated by the formula above. The noise power-spectrum estimate is then subtracted from the speech frame's power spectrum to yield the denoised speech power spectrum, from which the denoised speech frame is recovered.
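Steps a to d describe classical spectral subtraction. Because the patent's threshold formula is given only as an image, the sketch below uses the generic form; the over-subtraction factor `alpha` and the spectral floor are common conventions, not the patent's parameters:

```python
import numpy as np

def spectral_subtract(frames, is_speech, alpha=1.0, floor=0.01):
    """Generic spectral subtraction over framed audio (steps a-d).
    frames: 2-D array, one time frame per row;
    is_speech: endpoint-detection result, one boolean per frame."""
    n_bins = frames.shape[1] // 2 + 1
    noise_psd = np.zeros(n_bins)  # running noise power-spectrum estimate
    count = 0
    out = np.empty_like(frames, dtype=float)
    for k, frame in enumerate(frames):
        spec = np.fft.rfft(frame)
        psd = np.abs(spec) ** 2
        if not is_speech[k]:
            # b (silent frame): update the noise power-spectrum estimate.
            count += 1
            noise_psd += (psd - noise_psd) / count
            out[k] = frame
        else:
            # c: subtract the noise estimate, keeping a small spectral floor.
            clean_psd = np.maximum(psd - alpha * noise_psd, floor * psd)
            # d: rebuild the denoised frame using the original phase.
            clean_spec = np.sqrt(clean_psd) * np.exp(1j * np.angle(spec))
            out[k] = np.fft.irfft(clean_spec, n=frame.size)
    return out
```

Since the cleaned power spectrum is never larger than the original, the energy of each processed speech frame can only decrease, which is what improves the signal-to-noise ratio.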
In summary, the invention designs and develops a neural-network-based training method for machine self-learning construction of a knowledge graph: the threshold speech noise-reduction algorithm yields a smaller mean square error and improves the signal-to-noise ratio of the reconstructed speech signal, and a dialogue model trained with a neural network enables the robot to talk freely with the user.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and embodiments; it is fully applicable in the various fields to which it pertains, and further modifications may readily be made by those skilled in the art without departing from the general concept defined by the appended claims and their equivalents. The invention is therefore not limited to the details shown and described herein.

Claims (5)

1. A machine self-learning knowledge-graph construction training method based on a neural network, characterised by comprising the following steps:
obtaining a sentence uttered by a user in a natural scene, filtering and denoising the input sentence with a speech noise-reduction algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and its category;
the speech noise-reduction algorithm comprising:
a. distinguishing, through endpoint detection, silent frames from speech frames;
b. for a silent frame, taking the power-spectrum value of the current frame as the noise power-spectrum estimate; for a speech frame, calculating the speech-noise power-spectrum estimate;
c. subtracting the speech-noise power-spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum, the speech-noise power-spectrum estimate being calculated by the formula given in the original [formula image not reproduced], in which I is the noise power-spectrum energy; λ is the threshold value [its defining formula image is likewise not reproduced]; n is the frame number of the noise signal; J is a conversion coefficient (1 to 5); e is the natural constant; π is the ratio of a circle's circumference to its diameter; f_c is the frequency of the noise signal; τ(t) = 0.03t² + 0.6t + 0.1; and t is the decomposition scale, 1 ≤ t ≤ 4;
d. obtaining the denoised speech frame from the denoised speech power spectrum;
determining a matched feedback sentence according to the sentence category;
if no match is found, giving an answer to the user's sentence according to the neural-network dialogue model, which comprises the following steps:
constructing the encoding layer of the user-sentence model as a first neural network, and parsing the user's sentence in the first neural network to obtain a first intermediate vector expressing its semantics;
constructing the decoding layer of the dialogue-generation model as a second neural network, and parsing the intermediate vector in the second neural network to obtain a vector group representing the sentence's answer; and
outputting the vector group representing the sentence's answer as the answer to the question;
parsing the user's sentence in the first neural network comprising the following steps:
in the encoding layer, splitting the user's input sentence into the smallest word units that carry semantics, obtaining the attribute of each word unit, selecting at least one information-rich word as a central word, and feeding the central words in vector form into the input layer of the first neural network as a question vector group;
in the hidden layer of the first neural network, semantically analysing the output of the input layer together with the hidden layer's own output at the previous moment, and forming, by linear weighted combination, an intermediate vector representing the meaning of the sentence;
the linear weighted combination comprising the following steps:
step one: counting n groups of central-word data groups extracted from user input sentences, n being a positive integer; with x_i the probability of the central word appearing within λ days in each data group, and y_i the probability of the central word appearing within λ days in the data group of the preceding sentence, establishing the univariate regression model:
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, …, n, and ω′_i is the weighted regression coefficient within λ days;
step two: solving the formula of step one by the least-squares method, and calculating the regression-coefficient estimates within λ days:
ω̂′_i = Σ_{j=1}^{n} (x_{ij} − x̄_j)(y_{ij} − ȳ_j) / Σ_{j=1}^{n} (x_{ij} − x̄_j)²
where ω̂′_i is the estimate of the regression coefficient; x_{ij} is the probability of the i-th central word appearing in the j-th central-word data group; x̄_j is the mean probability of the j-th central-word data group; y_{ij} is the probability of the i-th central word appearing in the central-word data group of the sentence at the moment before the j-th user input sentence; and ȳ_j is the mean probability of the j-th preceding-moment central-word data group;
step three: carrying out normalisation to obtain the weighted weight values:
ω_i = ω̂′_i / Σ_{k=1}^{n} ω̂′_k
where ω_i is the weighted weight of the user input sentence.
2. The neural-network-based machine self-learning knowledge-graph construction training method as claimed in claim 1, wherein parsing the intermediate vector in the second neural network comprises the following steps:
receiving the intermediate vector at the decoding layer and feeding it to the input layer of the second neural network;
in the hidden layer of the second neural network, semantically analysing the intermediate vector from the input layer together with the hidden layer's own output at the previous moment, and generating in sequence a plurality of single vectors forming an answer vector group, the semantics of each single vector corresponding to the semantics of one smallest word unit in the answer output sentence; and
outputting the answer vector group at the output layer of the second neural network.
3. The neural-network-based machine self-learning knowledge-graph construction training method as claimed in any one of claims 1 to 2, wherein after the answer vector group is output as an answer sentence, the answer sentence is stored in the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
4. The neural-network-based machine self-learning knowledge-graph construction training method as claimed in any one of claims 1 to 2, wherein after the knowledge-base matching calculation a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and whether the dialogue-generation model must be asked for an answer is determined by the validity of that request flag bit.
5. The neural-network-based machine self-learning knowledge-graph construction training method as claimed in claim 4, wherein the output collocation is presented for the user to select and apply, and is stored in the knowledge base when the output answer is accurate.
CN201710127387.0A | priority 2017-03-06 | filed 2017-03-06 | Machine self-learning construction knowledge graph training method based on neural network | Active | CN106875940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710127387.0A CN106875940B (en) 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network

Publications (2)

Publication Number Publication Date
CN106875940A (en) 2017-06-20
CN106875940B (en) 2020-08-14

Family

ID=59171199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710127387.0A Active CN106875940B (en) 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network

Country Status (1)

Country Link
CN (1) CN106875940B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388793B (en) * 2017-08-03 2023-04-07 阿里巴巴集团控股有限公司 Entity marking method, intention identification method, corresponding device and computer storage medium
CN109933773B (en) * 2017-12-15 2023-05-26 上海擎语信息科技有限公司 Multiple semantic statement analysis system and method
CN108108449A (en) * 2017-12-27 2018-06-01 哈尔滨福满科技有限责任公司 Implementation method of a question answering system based on multi-source heterogeneous data, and the system, for the medical field
CN108389614B (en) * 2018-03-02 2021-01-19 西安交通大学 A method for constructing medical image atlas based on image segmentation and convolutional neural network
WO2020153047A1 (en) 2019-01-24 2020-07-30 ソニーセミコンダクタソリューションズ株式会社 Voltage control device
CN110349463A (en) * 2019-07-10 2019-10-18 南京硅基智能科技有限公司 A reverse tutoring system and method
US12210973B2 (en) * 2019-09-12 2025-01-28 Oracle International Corporation Compressing neural networks for natural language understanding
US11379710B2 (en) * 2020-02-28 2022-07-05 International Business Machines Corporation Personalized automated machine learning
CN113450781B (en) * 2020-03-25 2022-08-09 阿里巴巴集团控股有限公司 Speech processing method, speech encoder, speech decoder and speech recognition system
CN112309183A (en) * 2020-11-12 2021-02-02 江苏经贸职业技术学院 Interactive listening and speaking exercise system suitable for foreign language teaching
CN112528039A (en) * 2020-12-16 2021-03-19 中国联合网络通信集团有限公司 Word processing method, device, equipment and storage medium
CN112487173B (en) * 2020-12-18 2021-09-10 北京百度网讯科技有限公司 Man-machine conversation method, device and storage medium
CN113870905A (en) * 2021-09-29 2021-12-31 马上消费金融股份有限公司 An audio processing method, model training method, device and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1647069A (en) * 2002-04-11 2005-07-27 株式会社PtoPA Conversation control system and conversation control method
CN104217226A (en) * 2014-09-09 2014-12-17 天津大学 Dialogue act identification method based on deep neural networks and conditional random fields
CN105704013A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Context-based topic updating data processing method and apparatus
CN105787560A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Dialogue data interaction processing method and device based on recurrent neural network
CN106055662A (en) * 2016-06-02 2016-10-26 竹间智能科技(上海)有限公司 Emotion-based intelligent conversation method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006039120A (en) * 2004-07-26 2006-02-09 Sony Corp Interactive device and interactive method, program and recording medium

Also Published As

Publication number Publication date
CN106875940A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
CN106875940B (en) Machine self-learning construction knowledge graph training method based on neural network
CN111897941B (en) Dialog generation method, network training method, device, storage medium and equipment
CN111966800B (en) Emotion dialogue generation method and device and emotion dialogue model training method and device
US20180329884A1 (en) Neural contextual conversation learning
CN108228576B (en) Text translation method and device
CN114510570B (en) Intention classification method, device and computer equipment based on small sample corpus
CN107451230A (en) A kind of answering method and question answering system
CN109344242B (en) A dialogue question answering method, device, equipment and storage medium
CN116991982B (en) Interactive dialogue method, device, equipment and storage medium based on artificial intelligence
CN113111190B (en) Knowledge-driven dialogue generation method and device
CN108595436A (en) The generation method and system of emotion conversation content, storage medium
CN111324736B (en) Man-machine dialogue model training method, man-machine dialogue method and system
CN114333790B (en) Data processing method, device, equipment, storage medium and program product
KR20220066554A (en) Method, apparatus and computer program for buildding knowledge graph using qa model
CN110674276B (en) Robot self-learning method, robot terminal, device and readable storage medium
CN111126552A (en) Intelligent learning content pushing method and system
KR20220063476A (en) Architecture for generating qa pairs from contexts
CN110597968A (en) Reply selection method and device
CN111046157A (en) A method and system for generating general English human-computer dialogue based on balanced distribution
CN116822633A (en) Model reasoning method and device based on self-cognition and electronic equipment
CN114297399A (en) Knowledge graph generation method, system, storage medium and electronic device
CN118861218A (en) Sample data generation method, device, electronic device and storage medium
CN110955765A (en) Corpus construction method and apparatus of intelligent assistant, computer device and storage medium
CN111783434B (en) Method and system for improving noise immunity of reply generation model
CN118981523A (en) Model training method, response sentence generation method, device and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Dongliang

Inventor after: Liu Yingbo

Inventor after: Wang Hongbin

Inventor after: Jiang Yuji

Inventor after: Yao Xing

Inventor after: Yu Yanlong

Inventor after: Li Xiaowen

Inventor after: Zhang Zhiwei

Inventor after: Zhang Lingling

Inventor before: Liu Yingbo

Inventor before: Wang Dongliang

Inventor before: Wang Hongbin

GR01 Patent grant