WO2015075975A1 - Dialogue control apparatus and dialogue control method - Google Patents
Dialogue control apparatus and dialogue control method
- Publication number
- WO2015075975A1 (PCT/JP2014/070768)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- intention
- transition
- dialogue
- unit
- dialog
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/268—Morphological analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/027—Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- The present invention relates to a dialogue control apparatus and a dialogue control method for conducting a dialogue based on natural-language input and executing a command according to the user's intention.
- A method is disclosed in which the system guides the user through a dialogue so that the goal can be achieved even when the user does not remember the command for achieving it.
- One way to achieve this is to construct a dialogue scenario in advance as a tree structure and to follow intermediate nodes from the root of the tree (moving along the tree structure is hereinafter referred to as node activation). When a leaf node is reached, the user's goal is achieved. Which path of the dialogue-scenario tree is followed is decided from the keywords held by each node of the tree: among the transition destinations activated at that time, the one whose keywords are contained in the user's utterance is selected.
- Further, each scenario holds a plurality of keywords that characterize it, so that the system selects a scenario from the first user utterance and decides how to proceed with the dialogue.
- A method is also disclosed for changing the topic, in which the user switches to a different scenario and route based on the multiple keywords assigned to the multiple scenarios, and the dialogue proceeds along that route.
- Since the conventional dialogue control apparatus is configured as described above, a new scenario can be selected when no transition is possible.
- However, since a tree-structured scenario created from the functional design of the system differs from the expressions with which the user refers to those functions, a wrong scenario may be selected during a conversation that uses the tree-structured scenario.
- When the uttered content is an utterance not assumed by the current scenario, it is assumed that another scenario may apply, and a plausible scenario is selected from the utterance content.
- However, priority is given to continuing the ongoing scenario, so there is a problem that no transition is performed even when another scenario is more plausible.
- The present invention has been made to solve the above-described problems, and an object of the present invention is to provide a dialogue control device that performs appropriate transitions even for unexpected inputs and executes appropriate commands.
- To this end, the dialogue control device of the present invention includes: an intention estimation unit that estimates the intention of an input based on data obtained by converting natural-language input into a morpheme string; an intention estimation weight determination unit that determines an intention estimation weight for each intention estimated by the intention estimation unit, based on data holding the intentions in a hierarchical structure and on the intention activated at the time; a transition node determination unit that corrects the estimation result of the intention estimation unit according to the intention estimation weights and determines the intention to be newly activated by the transition; a dialogue turn generation unit that generates a dialogue turn from the one or more intentions activated by the transition node determination unit; and a dialogue control unit that controls at least one of the processes performed by the intention estimation unit, the intention estimation weight determination unit, the transition node determination unit, and the dialogue turn generation unit, and that, by repeating this control, finally executes the set command.
- The dialogue control device of the present invention determines an intention estimation weight for each estimated intention, corrects the intention estimation result according to the weight, and determines the intention to be newly activated by the transition. Therefore, an appropriate transition is performed even for an unexpected input, and an appropriate command can be executed.
- FIG. 1 is a block diagram showing a dialogue control apparatus according to Embodiment 1 of the present invention.
- The dialogue control apparatus of FIG. 1 includes a voice input unit 1, a dialogue control unit 2, a voice output unit 3, a voice recognition unit 4, a morpheme analysis unit 5, an intention estimation model 6, an intention estimation unit 7, intention hierarchy graph data 8, an intention estimation weight determination unit 9, a transition node determination unit 10, dialogue scenario data 11, dialogue history data 12, a dialogue turn generation unit 13, and a speech synthesis unit 14.
- The voice input unit 1 is an input unit that receives voice input to the dialogue control device.
- The dialogue control unit 2 is a control unit that controls the units from the voice recognition unit 4 to the speech synthesis unit 14 so as to advance the dialogue and finally execute the command assigned to an intention.
- The voice output unit 3 is an output unit through which the dialogue control device outputs voice.
- the voice recognition unit 4 is a processing unit that recognizes the voice input from the voice input unit 1 and converts it into text.
- the morpheme analysis unit 5 is a processing unit that divides the recognition result recognized by the speech recognition unit 4 into morphemes.
- The intention estimation model 6 is model data for estimating an intention from the morphological analysis result produced by the morpheme analysis unit 5.
- The intention estimation unit 7 is a processing unit that receives the morphological analysis result from the morpheme analysis unit 5 and outputs an intention estimation result using the intention estimation model 6; it outputs a list of pairs of an intention and a score representing the likelihood of that intention.
- For intention estimation, a method such as the maximum entropy method can be used.
- For example, independent words such as "destination, setting" (hereinafter referred to as features) are extracted from the morphological analysis result and paired with the correct intention "destination setting"; from a large number of such feature–intention pairs collected as training data, intention estimation using the maximum entropy method is performed.
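As a rough illustration of this kind of maximum-entropy (multinomial logistic regression) intention estimation, the following stdlib-only sketch trains on a few hypothetical feature–intention pairs. The feature names, intention labels, and training data are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch of maximum-entropy intention estimation (stdlib only).
import math
from collections import defaultdict

# Illustrative training data: (feature list, correct intention).
TRAIN = [
    (["destination", "setting"], "destination_setting"),
    (["route", "change"], "route_change"),
    (["route", "setting"], "route_change"),
    (["destination", "search"], "destination_search"),
]

INTENTS = sorted({y for _, y in TRAIN})
W = defaultdict(float)  # weights indexed by (intention, feature)

def scores(feats):
    """Softmax over intentions -> list of (intention, probability), best first."""
    z = {y: sum(W[(y, f)] for f in feats) for y in INTENTS}
    m = max(z.values())
    e = {y: math.exp(v - m) for y, v in z.items()}
    s = sum(e.values())
    return sorted(((y, e[y] / s) for y in INTENTS), key=lambda p: -p[1])

# Train by simple gradient ascent on the conditional log-likelihood.
for _ in range(200):
    for feats, gold in TRAIN:
        probs = dict(scores(feats))
        for y in INTENTS:
            grad = (1.0 if y == gold else 0.0) - probs[y]
            for f in feats:
                W[(y, f)] += 0.1 * grad

print(scores(["route", "change"])[0][0])
```

In practice a trained model of this kind would play the role of the intention estimation model 6, returning exactly the list of intention–score pairs described above.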
- The intention estimation weight determination unit 9 is a processing unit that determines, from the intention hierarchy information of the intention hierarchy graph data 8 and the currently activated intention information, the weight to be applied to the intention scores estimated by the intention estimation unit 7.
- The transition node determination unit 10 determines the intention(s) to be activated next (including cases where a plurality of intentions are activated) by re-evaluating the list of intentions and intention scores estimated by the intention estimation unit 7 using the weights determined by the intention estimation weight determination unit 9.
- The dialogue scenario data 11 is dialogue scenario data that describes what should be executed next for the one or more intentions selected by the transition node determination unit 10.
- the dialogue history data 12 is dialogue history data for storing a dialogue state.
- The dialogue history data 12 holds information for changing the operation according to the previous state, and for returning to the previous state when the user denies a confirmation dialogue.
- The dialogue turn generation unit 13 receives the one or more intentions selected by the transition node determination unit 10 and, using the dialogue scenario data 11, the dialogue history data 12, and the like, generates a scenario such as generating and executing a system response, determining a command, and waiting for the next input from the user.
- The speech synthesis unit 14 is a processing unit that generates synthesized speech from the system response generated by the dialogue turn generation unit 13.
- Fig. 2 shows an example of intention hierarchy data assuming car navigation.
- nodes 21 to 30 and 86 are intention nodes representing intentions of the intention hierarchy.
- The intention node 21 is the root node at the top of the intention hierarchy, and an intention node 22 representing the group of navigation functions is placed below it.
- The intention 81 is an example of a special intention set on a transition link.
- The intentions 82 and 83 are special intentions used when confirmation is requested from the user during the dialogue.
- The intention 84 is a special intention for going back one dialogue state, and the intention 85 is a special intention for stopping the dialogue.
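The intention hierarchy described above can be pictured as a simple tree of intention nodes. The sketch below is a minimal illustration with assumed node names and commands; the patent does not specify the actual data format of the intention hierarchy graph data 8.

```python
# Hedged sketch of an intention hierarchy tree (node names are assumptions).
class IntentionNode:
    def __init__(self, name, command=None):
        self.name = name
        self.command = command   # command executed when this intention is confirmed
        self.children = []
        self.parent = None

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

root = IntentionNode("root")
navi = root.add(IntentionNode("navigation"))
route = navi.add(IntentionNode("route_selection[type=?]"))
route.add(IntentionNode("route_selection[type=toll]", command="SetRoute(toll)"))
route.add(IntentionNode("route_selection[type=free]", command="SetRoute(free)"))

def descendants(node):
    """All intentions below a node -- roughly the priority estimation range."""
    out = []
    for c in node.children:
        out.append(c)
        out.extend(descendants(c))
    return out
```

With a structure like this, activating a node such as `route_selection[type=?]` makes its descendants the preferred candidates for the next estimation, which mirrors the priority intention estimation range 41 discussed for FIG. 4.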
- FIG. 3 shows an example of the dialogue in the first embodiment.
- “U:” at the beginning of the line represents the user's utterance.
- “S:” represents a response from the system.
- 31, 33, 35, 37, and 39 are system responses and 32, 34, 36, and 38 are user utterances; the conversation progresses in this order.
- FIG. 4 is an example of a transition showing what kind of intention node transition occurs as the dialogue of FIG. 3 progresses.
- The node 28 is the intention activated by the user utterance 32, the node 25 is the intention activated in turn by the user utterance 34, and the node 26 is the intention activated by the user utterance 38.
- Reference numeral 41 denotes the priority intention estimation range, that is, the range of intentions that is preferentially estimated while the intention node 28 is activated.
- Reference numeral 42 denotes the link along which the transition occurred.
- FIG. 5 is an explanatory diagram showing an example of the intention estimation result and an example of an expression for correcting the intention estimation result according to the conversation state.
- Expression 51 is the score correction expression for the intention estimation results, and 52 to 56 are intention estimation results.
- FIG. 6 shows the dialogue scenarios stored in the dialogue scenario data 11; they describe what system response is made for an activated intention node and what command is executed on the device operated by the dialogue control device.
- 61 to 67 are dialogue scenarios for the individual intention nodes.
- 68 and 69 are dialogue scenarios registered when a system response for selection is to be described for the case where a plurality of intention nodes are activated. In general, when a plurality of intention nodes are activated, they are connected using a selection prompt before the dialogue scenario of each intention node is executed.
- FIG. 7 shows the dialogue history data 12, and reference numerals 71 to 77 indicate backtrack points for each intention.
- FIG. 8 is a flowchart showing the flow of the dialogue in Embodiment 1.
- The dialogue is executed according to this flow.
- FIG. 9 is a flowchart showing a flow of dialog turn generation in the first embodiment.
- When only one intention node is activated, a dialogue turn is generated.
- When a plurality of intention nodes are activated, a system response for selecting among the activated intention nodes is added to the dialogue turn in step ST30.
- the operation of the dialogue control apparatus will be described.
- In the following description, the input is assumed to be natural-language speech (input using one or more keywords or a sentence).
- It is also assumed that the user's utterances are recognized correctly, without misrecognition.
- A dialogue is started using an utterance start button (not shown).
- none of the intention nodes in the intention hierarchy graph of FIG. 2 are in an activated state.
- In step ST11, when the user utters the utterance 32 "I want to change the route", the voice is input from the voice input unit 1 and converted into text by the voice recognition unit 4.
- When the voice recognition ends, the process proceeds to step ST12, and "I want to change the route" is passed to the morpheme analysis unit 5.
- The morpheme analysis unit 5 analyzes the recognition result and performs morphological analysis such as "route/noun, o/particle, change/noun (sa-variant connection), shi/verb, tai/auxiliary verb".
- The process then moves to step ST13, where the morphological analysis result is passed to the intention estimation unit 7 and intention estimation is performed using the intention estimation model 6.
- The intention estimation unit 7 extracts the features used for intention estimation from the morphological analysis result.
- In step ST13, the feature list "route, setting" is extracted from the morphological analysis result of the recognition of utterance 32, and the intention estimation unit 7 performs intention estimation based on these features.
- The process proceeds to step ST14, where the list of intention–score pairs estimated by the intention estimation unit 7 is passed to the transition node determination unit 10 and the scores are corrected.
- The process then moves to step ST15, where the transition node to be activated is determined.
- The score correction formula 51 is used to correct the scores, where i represents an intention and s_i represents the score of intention i.
- The transition node determination unit 10 then determines the set of intentions to activate.
- The operation of the transition node determination unit 10 includes, for example, the following intention node determination method.
- (C) When the maximum score is less than 0.1, nothing is activated because the intention cannot be understood. In Embodiment 1, in the situation where the utterance "I want to change the route" is made, the maximum score belongs to the intention "route selection[type=?]", so only that intention is activated by the transition node determination unit 10.
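One possible reading of the score correction and activation decision is sketched below. The exact form of the correction expression 51 is not reproduced here, so a simple multiplicative weight followed by renormalization is assumed, together with the 0.1 activation threshold of rule (C); the intention names and weights are illustrative.

```python
# Hedged sketch of score correction and transition-node activation.
THRESHOLD = 0.1  # below this, the intention is treated as not understood

def correct(estimates, weights):
    """estimates: list of (intention, score); weights: dict intention -> weight.
    Apply the weights and renormalize so the scores sum to 1."""
    scored = [(i, s * weights.get(i, 1.0)) for i, s in estimates]
    total = sum(s for _, s in scored) or 1.0
    return sorted(((i, s / total) for i, s in scored), key=lambda p: -p[1])

def activate(corrected):
    """Return the intention(s) to activate from the corrected, sorted list."""
    best_intent, best_score = corrected[0]
    if best_score < THRESHOLD:
        return []            # rule (C): intention not understood, activate nothing
    return [best_intent]

est = [("route_selection[type=?]", 0.6), ("destination_setting", 0.4)]
w = {"route_selection[type=?]": 2.0}   # boosted: inside the priority range
print(activate(correct(est, w)))
```

The weight dictionary here stands in for the output of the intention estimation weight determination unit 9: intentions inside the priority estimation range get a weight above 1, so they win ties against out-of-range intentions.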
- In step ST16, the dialogue turn generation unit 13 generates the processing list for the next turn based on the content written in the dialogue scenario data 11.
- This gives the processing flow of FIG. 9.
- In step ST21 of FIG. 9, since the only activated intention node is the intention node 28, the process proceeds to step ST22. Since there is no DB search condition in the dialogue scenario 61 of the intention node 28, the process proceeds to step ST28. Since no command is defined in the dialogue scenario 61, the process moves to step ST27, and a system response for selecting among the lower intention nodes 29, 30, and so on of the intention node 28 is generated.
- In step ST16, the dialogue control unit 2 receives the dialogue turn and sequentially executes the processes added to the dialogue turn.
- the speech of the system response 33 is created by the speech synthesizer 14 and output from the speech output unit 3.
- The intention estimation result 55 is determined to be the intention of the user's utterance, and the activated node is set to the intention node 25.
- The dialogue turn generation unit 13 generates a dialogue turn based on the facts that the activated intention node has transitioned and that there is no link from the transition source; since the transition goes to a node with no link, the command is executed only after confirmation.
- The dialogue turn generation unit 13 uses the dialogue scenario 67 to replace "$genre$" in the post-execution prompt "$genre$ near the current location" with "ramen shop", generating the system response "Searching for a ramen shop near the current location."
- To receive the result, the DB search "SearchDB(current location, ramen shop)" is added to the dialogue turn, the system response "Please select from the list" is added as the following response, and the next process is started (step ST22 → step ST23 → step ST24 → step ST25 in FIG. 9). If the DB search returns only one result, the process moves to step ST26, a system response notifying that there is a single search result is added to the dialogue turn, and the process moves to step ST27.
- The dialogue control unit 2 outputs the system response 37, "Searched for ramen shops near the current location. Please select from the list.", displays the list of ramen shops retrieved from the database, and waits for the user to speak.
- the system response 39 “I made a route through XX ramen” is added to the dialogue turn (step ST22 ⁇ step ST28 ⁇ step ST29 ⁇ step ST27 in FIG. 9).
- the dialogue control unit 2 executes the received dialogue turns in order.
- The waypoint addition is executed, and a synthesized voice announcing that XX Ramen has been set as a waypoint is output. Since this dialogue turn includes command execution, the dialogue is terminated and the system returns to the initial state of waiting for an utterance.
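The overall flow of FIG. 8 — recognition, morphological analysis, intention estimation, weight determination, transition-node decision, and dialogue-turn execution, repeated until a command is executed — could be sketched as the following control loop. Every component here is a stub with assumed names and behavior; it only illustrates the shape of the loop, not the patent's implementation.

```python
# Hedged sketch of the dialogue control loop of FIG. 8 (all names assumed).
def run_dialogue(recognize, analyze, estimate, weights_for, decide, make_turn):
    active = []                            # currently activated intention nodes
    while True:
        text = recognize()                 # speech recognition (ST11)
        morphemes = analyze(text)          # morphological analysis (ST12)
        estimates = estimate(morphemes)    # intention estimation (ST13)
        w = weights_for(active)            # weight determination (ST14)
        active = decide(estimates, w)      # transition-node decision (ST15)
        turn = make_turn(active)           # dialogue-turn generation (ST16)
        for step in turn["steps"]:
            print(step)                    # stand-in for response output / DB search
        if turn.get("command"):
            return turn["command"]         # command execution ends the dialogue

# Minimal demo with stubbed components.
inputs = iter(["I want to change the route", "a toll road"])
cmd = run_dialogue(
    recognize=lambda: next(inputs),
    analyze=str.split,
    estimate=lambda m: [("route_selection", 1.0)] if "route" in m
                       else [("route_selection[type=toll]", 1.0)],
    weights_for=lambda a: {},
    decide=lambda est, w: [est[0][0]],
    make_turn=lambda a: {"steps": ["response for " + a[0]],
                         "command": "SetRoute(toll)" if "toll" in a[0] else None},
)
```

The loop terminates exactly as the embodiment describes: as soon as a generated turn contains command execution, the command is run and the dialogue ends.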
- As described above, according to Embodiment 1, the dialogue control device includes: an intention estimation unit that estimates the intention of an input based on data obtained by converting natural-language input into a morpheme string; an intention estimation weight determination unit that determines intention estimation weights for the intentions estimated by the intention estimation unit, based on data holding the intentions in a hierarchical structure and on the intention activated at the time; a transition node determination unit that corrects the estimation result of the intention estimation unit according to the intention estimation weights and determines the intention to be newly activated by the transition; a dialogue turn generation unit that generates a dialogue turn from the one or more intentions activated by the transition node determination unit; and a dialogue control unit that controls at least one of the processes performed by the intention estimation unit, the intention estimation weight determination unit, the transition node determination unit, and the dialogue turn generation unit, and that, by repeating this control, executes the set command.
- Likewise, the dialogue control method of Embodiment 1, in a dialogue control device that conducts a dialogue by estimating the intention of natural-language input and executes the command set as a result, includes: an intention estimation step of estimating the intention of the input based on data obtained by converting the natural-language input into a morpheme string; an intention estimation weight determination step of determining intention estimation weights for the intentions estimated in the intention estimation step, based on data holding the intentions in a hierarchical structure and on the intention activated at the time; a transition node determination step of correcting the estimation result according to the intention estimation weights and determining the intention to be newly activated by the transition; and a dialogue turn generation step of generating a dialogue turn from the one or more intentions activated in the transition node determination step.
- FIG. 10 is a configuration diagram illustrating the dialogue control apparatus according to Embodiment 2.
- the command history data 15 is data for storing commands executed so far together with execution times.
- The history-considering dialogue turn generation unit 16 is a processing unit that, in addition to the function of the dialogue turn generation unit 13 of Embodiment 1 using the dialogue scenario data 11 and the dialogue history data 12, generates dialogue turns using the command history data 15.
- FIG. 11 shows an example of the dialogue in the second embodiment.
- 101, 103, 105, 106, 108, 109, 111, 113, 115 are system responses
- 102, 104, 107, 110, 112, 114 are user utterances.
- FIG. 12 is a diagram showing an example of the intention estimation result.
- 121 to 124 are intention estimation results.
- FIG. 13 is an example of the command history data 15.
- the command history data 15 includes a command execution history list 15a and a command misunderstanding possibility list 15b.
- the command execution history in the command execution history list 15a records the result of command execution with time.
- The command misunderstanding possibility list 15b is a list in which an entry is registered when, among the option intentions in the command execution history, an intention that was not the executed intention is executed within a predetermined time.
- FIG. 14 is a flowchart of a process for adding data to the command history data 15 when a turn is generated by the history considering dialogue turn generation unit 16 according to the second embodiment.
- FIG. 15 is a flowchart showing a process as to whether or not confirmation is to be made to the user when the intention to execute a command is determined by the history considering dialogue turn generation unit 16.
- The basic operation of Embodiment 2 is the same as that of Embodiment 1; the differences are the addition of the command history data 15 and the replacement of the dialogue turn generation unit 13 with the history-considering dialogue turn generation unit 16. That is, the difference from Embodiment 1 is that when an intention that may have been misunderstood is finally selected as an intention with a command definition, a dialogue turn that asks the user for confirmation is generated instead of a scenario that executes the command directly.
- The dialogue in Embodiment 2 shows a case where a user who does not understand the application well adds a registered place while intending to set the destination, notices this later, and sets the destination again.
- the overall flow of the dialog is the same as that of the first embodiment and follows the flow of FIG. 8, and thus the description of the same operation as that of the first embodiment is omitted. Also, the generation of the dialog turn is the same as the flow of FIG.
- the transition node determination unit 10 determines the intention node to be activated based on the intention estimation result.
- When the intention node to be activated is determined under the same conditions as in Embodiment 1, case (b) applies, and the intention nodes 26, 27, and 86 are activated.
- However, an intention node whose operation cannot be executed is not activated; for example, if no destination is set, the intention node 26 is not activated because a waypoint cannot be set.
- Here, the destination is not set, so the intention node 26 is not activated.
- (Step ST21 → step ST30 in FIG. 9.)
- The finally completed scenario is transferred to the dialogue control unit 2, the system response 103 is output, and the system waits for the user to speak.
- When the intention node 86 is selected from the intention estimation result, the dialogue scenario 65 is selected and the command "Add(registered place)" is executed (step ST21 → step ST22 → step ST28 → step ST29 in FIG. 9).
- In step ST27, the history-considering dialogue turn generation unit 16 determines whether to register the command in the command execution history according to the flow of FIG. 14.
- In step ST31, it is determined whether the number of intentions immediately before executing the command is 0 or 1.
- In step ST36, the command execution history 131 is added to the command execution history list 15a.
- In step ST37, when an option intention that was not executed is executed within a certain time, it is registered in the command misunderstanding possibility list 15b; since the execution history 132 does not yet exist, the process here ends without doing anything.
- From step ST31, the process moves to step ST32. Since there is no immediately preceding intention in step ST32, the process moves to step ST33, and the command execution history 132 is registered in step ST36.
- When the command execution history is registered, in step ST37 it is checked whether, among ambiguous option intentions, an intention that was not previously selected has been selected within a certain time (for example, 10 minutes); if so, the user may be misunderstanding, and the process moves to step ST38, where an entry is registered in the command misunderstanding possibility list 15b. Since the command execution histories 131 and 132 suggest that destination setting may have been misunderstood as registered-place setting, the command misunderstanding possibility 133 is added, and the confirmation count and the correct-intention execution count are set to 1.
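The bookkeeping above — recording each execution and flagging a possible misunderstanding when the other option of an earlier ambiguous choice is executed shortly afterwards — could be sketched as follows. Field names, the 10-minute window, and the data layout are assumptions for illustration only.

```python
# Hedged sketch of command-history bookkeeping (lists 15a and 15b).
import time

WINDOW = 600        # "a certain time", e.g. 10 minutes, in seconds
history = []        # stands in for the command execution history list 15a
suspects = {}       # stands in for the command misunderstanding possibility list 15b

def record_execution(executed, options, now=None):
    """Record a command execution; options are the ambiguous candidate intentions."""
    now = now if now is not None else time.time()
    history.append({"intent": executed, "options": options, "time": now})
    # If an option of an earlier ambiguous choice, other than the intention that
    # was executed then, is executed soon after, the user may have misunderstood.
    for past in history[:-1]:
        if (now - past["time"] <= WINDOW
                and executed in past["options"]
                and executed != past["intent"]):
            suspects[(past["intent"], executed)] = {"confirms": 1, "correct": 1}

# Demo mirroring the embodiment: registered-place setting, then destination
# setting two minutes later with the same ambiguous option pair.
record_execution("add_registered_place",
                 ["set_destination", "add_registered_place"], now=0)
record_execution("set_destination",
                 ["set_destination", "add_registered_place"], now=120)
```

After the second call, the pair (add_registered_place, set_destination) is flagged, so the next time the first intention is about to be executed, a confirmation turn would be generated instead of direct execution.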
- In step ST42, a system response 113 urging confirmation is generated: "XX Center is a registered place, not a destination. Are you sure?"
- step ST43 the number of confirmations is incremented by 1, and the process ends.
- When the intention scheduled for execution does not exist in the command misunderstanding possibility list 15b, the process moves to step ST44 and the scheduled intention is executed.
- If the destination is then set without using the word "registration", the correct-intention execution count is not increased, while the confirmation count increases. That is, among the misunderstanding intentions present in the command misunderstanding possibility list 15b, the intentions that did not become execution intentions are no longer executed within the certain time.
- When the ratio of the correct-intention execution count to the confirmation count exceeds, for example, 2, the entry in the command misunderstanding possibility list is deleted and the confirmation stops, so that the dialogue can proceed smoothly.
- As described above, according to Embodiment 2, the device includes a history-considering dialogue turn generation unit that generates a dialogue turn from the one or more intentions activated by the transition node determination unit, records the commands executed as a result of the dialogue, and generates dialogue turns using the list in which an entry is registered when, among the option intentions in the command execution history, an intention that was not the executed intention is executed within a certain time. Therefore, even when the user may be misunderstanding a command, an appropriate transition is performed and an appropriate command can be executed.
- Further, the history-considering dialogue turn generation unit generates a confirmation dialogue turn when, among the option intentions in the command execution history, an intention that was not the executed intention is executed within a certain time; and when, after such turns have been generated, the misunderstanding intentions in the list are no longer executed within the certain time and this is repeated a set number of times, it deletes the list entry and stops generating confirmation turns. Thus, appropriate action is taken while the user does not understand the appropriate command, while unnecessary confirmations are avoided once the user does understand it.
- FIG. 16 is a configuration diagram illustrating the dialogue control apparatus according to the third embodiment.
- The dialogue control apparatus shown in the figure includes, in addition to the voice input unit 1 through the speech synthesis unit 14, additional transition link data 17 and a transition link control unit 18. Since the configurations of the voice input unit 1 through the speech synthesis unit 14 are the same as in Embodiment 1, their description is omitted here.
- the additional transition link data 17 is data in which a transition link when an unexpected transition is executed is recorded.
- the transition link control unit 18 is a control unit that adds data to the additional transition link data 17 and changes intention hierarchy data based on the additional transition link data 17.
- FIG. 17 shows an example of the dialogue in the third embodiment.
- the utterance in FIG. 17 is an example of the dialog executed at another time after the utterance in FIG. 3 is performed and the command is executed.
- 171, 173, 175, 177, 178, 180, 182, 184, and 186 are system responses and 172, 174, 176, 179, 181, 183, and 185 are user utterances; the conversation progresses in this order.
- FIG. 18 is an example of the intention estimation result in the third embodiment. Reference numerals 191 to 195 denote intention estimation results.
- FIG. 19 is an example of the additional transition link data 17.
- 201, 202 and 203 are additional transition links.
- FIG. 20 is a flowchart illustrating processing when the transition link control unit 18 performs transition link integration processing.
- FIG. 21 is an example of intention hierarchy data after integration.
- the transition of link 42 in FIG. 4 is selected.
- The intention estimation result 191 is recorded in the additional transition link data 17 via the intention estimation weight determination unit 9 and the transition link control unit 18.
- the dialog in FIG. 17 continues.
- the dialog is started by the system response 171, and the user utters the user utterance 172 “I want to change the route” in the same way as the dialog of FIG. 3.
- the intention estimation unit 7 generates the intention estimation result 52 of FIG. 5, the intention node 28 is selected, and the system response 173 is output in the same way as the dialog of FIG. 3 to wait for the user's utterance.
- the intention estimation results 192 and 193 are obtained.
- The transition scores are calculated on the assumption that the transition link 42 exists, and the intention estimation results 194 and 195 are obtained.
- The transition node determination unit 10 activates only the intention node 25 as the transition node. Since the dialogue turn generation unit 13 proceeds on the assumption that the transition link 42 exists, the system response 175 is added to the scenario without asking the user for confirmation, and the processing is transferred to the dialogue control unit 2.
- the dialogue scenario 63 is selected, and there is a command, so the command is executed and the processing ends.
- At this point, 1 is added to the transition count of additional transition link 201.
- When the transition count of an additional transition link is updated, it is determined, according to the flow of FIG. 20, whether the link can be replaced by a link to a higher-level intention in the intention hierarchy.
- In step ST51, since the transition count of additional transition link 201 has increased by 1, the additional transition links whose transition source matches that of link 201 are extracted.
- Here, the number of extracted links is 2. Because N in the condition of step ST51 is 3, or when there is no corresponding higher-level intention in step ST52, the determination is "YES" and the process ends.
- When step ST52 is "NO", the process moves to step ST53. In step ST53, and again in step ST54, since the main intention of the higher-level intention is the common "peripheral search", the determination is "YES".
- As described above, according to this embodiment, a transition control unit is provided that adds link information from a transition source to a transition destination, and the transition node determination unit treats a link added by the transition control unit in the same way as a normal link when deciding the intention. Therefore, appropriate transitions are made even for unexpected inputs, and an appropriate command can be executed.
- Furthermore, when there are a plurality of transitions to unexpected intentions and those unexpected intentions share a common intention as their parent node, the transition link control unit replaces the transitions to the unexpected intentions with a transition to the parent node; therefore, the command desired by the user can be executed with fewer interactions.
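For illustration only, the replacement of repeated unexpected-intention transitions by a transition to their common parent node (steps ST51 to ST54 of FIG. 20) can be sketched as follows; the data model, the function and variable names, and the threshold N are assumptions for this sketch, not the implementation disclosed here:

```python
# Illustrative sketch of transition-link integration (cf. FIG. 20).
from collections import namedtuple

Link = namedtuple("Link", ["source", "dest", "count"])

def integrate_links(links, parent_of, main_intention, source, n=3):
    """If at least n additional links share a transition source and their
    destinations share one parent intention with the same main intention,
    replace them with a single link to that parent."""
    same_source = [l for l in links if l.source == source]
    if len(same_source) < n:                       # ST51: too few links, stop
        return links
    parents = {parent_of.get(l.dest) for l in same_source}
    if len(parents) != 1 or None in parents:       # ST52: no common parent, stop
        return links
    parent = parents.pop()
    mains = {main_intention[l.dest] for l in same_source}
    if mains != {main_intention[parent]}:          # ST53/ST54: main intention differs
        return links
    merged = Link(source, parent, sum(l.count for l in same_source))
    return [l for l in links if l.source != source] + [merged]
```

In this sketch, three links from "route guidance" to different "peripheral search[...]" intentions would be collapsed into one link to the shared "peripheral search" parent, mirroring the integration of FIG. 21.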
- In Embodiments 1 to 3, the description has been given for Japanese; however, by changing the feature extraction method for intention estimation for each language, the invention can be applied to various languages such as English, German, and Chinese.
- In addition, the input natural-language text can be analyzed by a method such as pattern matching, and it is also possible to execute the intention estimation process directly after extracting slots such as $facility$ and $address$.
- In Embodiments 1 to 3, the input has been described as voice input; however, input means such as a keyboard may be used instead of voice recognition.
- In Embodiments 1 to 3, intention estimation is performed by processing the speech-recognition result text in the morphological analysis unit; however, if the result from the speech recognition engine itself includes a morphological analysis result, that information can be used directly for intention estimation.
- Although Embodiments 1 to 3 have been described using an example in which a learning model based on the maximum entropy method is assumed as the intention estimation method, the intention estimation method is not limited to this.
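As one hedged sketch of such a maximum-entropy-style model (the feature set, the weights, and the function name below are invented for illustration and are not the trained model assumed in the embodiments), each intention can be scored as a normalized exponential of a weighted sum of morpheme features:

```python
# Illustrative maximum-entropy-style intention scoring over morpheme features.
import math

def estimate_intentions(features, weights):
    """weights: {intention: {feature: weight}} -> {intention: probability}.
    Each intention's score is exp(sum of matching feature weights),
    normalized over all intentions so the scores form a distribution."""
    raw = {intent: math.exp(sum(w.get(f, 0.0) for f in features))
           for intent, w in weights.items()}
    z = sum(raw.values())
    return {intent: s / z for intent, s in raw.items()}
```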
- As described above, the dialogue control apparatus and the dialogue control method according to the present invention prepare a plurality of dialogue scenarios configured in advance in a tree structure, and can transition from one tree-structured scenario to another tree-structured scenario based on the dialogue with the user.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Machine Translation (AREA)
- Navigation (AREA)
Abstract
Description
Embodiment 1.
FIG. 1 is a configuration diagram showing a dialogue control apparatus according to Embodiment 1 of the present invention.
The dialogue control apparatus shown in FIG. 1 includes a voice input unit 1, a dialogue control unit 2, a voice output unit 3, a voice recognition unit 4, a morphological analysis unit 5, an intention estimation model 6, an intention estimation unit 7, intention hierarchy graph data 8, an intention estimation weight determination unit 9, a transition node determination unit 10, dialogue scenario data 11, dialogue history data 12, a dialogue turn generation unit 13, and a voice synthesis unit 14.
FIG. 6 is a diagram of the dialogue scenarios stored in the dialogue scenario data 11. For each activated intention node, a scenario describes what system response is made and what command is executed on the device operated by the dialogue control apparatus. Reference numerals 61 to 67 denote dialogue scenarios for intention nodes. On the other hand, 68 and 69 are dialogue scenarios registered when one wants to describe a system response for prompting a selection while a plurality of intention nodes are activated. In general, when a plurality of intention nodes are activated, they are connected using the pre-execution response prompts of the dialogue scenarios of the respective intention nodes.
FIG. 7 shows the dialogue history data 12, in which 71 to 77 indicate backtrack points for the respective intentions.
FIG. 9 is a flowchart showing the flow of dialogue turn generation in Embodiment 1. By following steps ST21 to ST29, a dialogue turn is generated when only one intention node is activated. On the other hand, when a plurality of intention nodes are activated, a system response for selecting an activated intention node is added to the dialogue turn in step ST30.
(a) If the maximum score is 0.6 or more, only the node with the maximum score is activated.
(b) If the maximum score is less than 0.6, all nodes with scores of 0.1 or more are activated.
(c) If the maximum score is less than 0.1, no node is activated because the intention could not be understood.
In the case of Embodiment 1, in the situation where the utterance "I want to change the route" has been made, the maximum score is 0.972, so only the intention "route selection [type=?]" is activated by the transition node determination unit 10.
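The activation rules (a) to (c) above can be sketched as follows; the thresholds 0.6 and 0.1 are taken from the description, while the function name and score representation are illustrative assumptions:

```python
# Illustrative sketch of the node-activation rules (a)-(c).
def activate_nodes(scores):
    """scores: {intention: estimation score} -> list of activated intentions."""
    if not scores:
        return []
    best = max(scores, key=scores.get)
    if scores[best] >= 0.6:                  # (a) activate only the top-scoring node
        return [best]
    if scores[best] >= 0.1:                  # (b) activate every node scoring >= 0.1
        return [i for i, s in scores.items() if s >= 0.1]
    return []                                # (c) intention not understood
```

With the score 0.972 from the example above, only "route selection [type=?]" would be activated.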
As a result, dialogue scenario 63 for intention node 26, "waypoint setting [facility=$facility$]", is selected, and the command "Add(waypoint, ○○ Ramen)" is added to the dialogue turn. Subsequently, system response 39, "○○ Ramen has been set as a waypoint", is added to the dialogue turn (step ST22 → step ST28 → step ST29 → step ST27 in FIG. 9).
FIG. 10 is a configuration diagram showing the dialogue control apparatus of Embodiment 2. In the figure, the voice input unit 1 through the dialogue history data 12 and the voice synthesis unit 14 are the same as in Embodiment 1, so the corresponding parts are given the same reference numerals and their description is omitted.
The command history data 15 stores the commands executed so far together with their execution times. The history-aware dialogue turn generation unit 16 is a processing unit that generates dialogue turns using the command history data 15, in addition to the functions of the dialogue turn generation unit 13 of Embodiment 1, which uses the dialogue scenario data 11 and the dialogue history data 12.
FIG. 14 is a flowchart of the process of adding data to the command history data 15 when the history-aware dialogue turn generation unit 16 of Embodiment 2 generates a turn. FIG. 15 is a flowchart showing the process of deciding whether to ask the user for confirmation when the history-aware dialogue turn generation unit 16 has determined the intention whose command is scheduled for execution.
By deleting the data in the command-misunderstanding possibility list and stopping confirmation once the ratio of correct executions to confirmations exceeds, for example, 2, the dialogue can proceed smoothly.
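This confirmation-suppression rule can be sketched as follows; the class and method names are illustrative assumptions, and only the ratio test (correct executions / confirmations exceeding 2) comes from the description:

```python
# Illustrative sketch of the command-misunderstanding possibility list.
class ConfirmationTracker:
    """Tracks intentions on the misunderstanding-possibility list and stops
    confirming once correct executions / confirmations exceed a threshold."""

    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.entries = {}   # intention -> {"correct": int, "confirmed": int}

    def add(self, intention):
        self.entries.setdefault(intention, {"correct": 0, "confirmed": 0})

    def needs_confirmation(self, intention):
        return intention in self.entries

    def record(self, intention, confirmed, correct):
        e = self.entries.get(intention)
        if e is None:
            return
        if confirmed:
            e["confirmed"] += 1
        if correct:
            e["correct"] += 1
        if e["confirmed"] and e["correct"] / e["confirmed"] > self.threshold:
            del self.entries[intention]   # ratio exceeded: stop confirming
```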
FIG. 16 is a configuration diagram showing the dialogue control apparatus of Embodiment 3. The illustrated dialogue control apparatus includes additional transition link data 17 and a transition link control unit 18, in addition to the voice input unit 1 through the voice synthesis unit 14. Since the configuration of the voice input unit 1 through the voice synthesis unit 14 is the same as in Embodiment 1, its description is omitted here. The additional transition link data 17 records the transition links used when an unexpected transition has been executed. The transition link control unit 18 is a control unit that adds data to the additional transition link data 17 and changes the intention hierarchy data based on the additional transition link data 17.
FIG. 19 is an example of the additional transition link data 17; 201, 202, and 203 are additional transition links.
FIG. 20 is a flowchart showing the processing when the transition link control unit 18 performs transition link integration.
FIG. 21 is an example of the intention hierarchy data after integration.
The first dialogue in Embodiment 3 has the content of FIG. 3: "waypoint setting [facility=$facility$]" is determined through system response 39 and its command is executed, and in the course of that dialogue the transition along link 42 of FIG. 4 is selected. Here, at the point when the transition destination is determined by the transition node determination unit 10, the intention estimation result 191 is added as additional transition link data to the additional transition link data 17 via the intention estimation weight determination unit 9 and the transition link control unit 18.
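The recording of such an unexpected transition into the additional transition link data can be sketched as follows, assuming a simple list-of-dicts representation (the field and function names are illustrative, not the disclosed data format):

```python
# Illustrative sketch of recording an unexpected transition as an
# additional transition link with a transition count.
def record_transition(link_data, source, dest):
    """Increment the count of an existing (source, dest) additional link,
    or append a new entry with count 1."""
    for link in link_data:
        if link["source"] == source and link["dest"] == dest:
            link["count"] += 1
            return link_data
    link_data.append({"source": source, "dest": dest, "count": 1})
    return link_data
```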
Claims (6)
- An intention estimation unit that estimates the intention of an input in natural language based on data obtained by converting the input into a morpheme sequence;
an intention estimation weight determination unit that determines an intention estimation weight for the intention estimated by the intention estimation unit, based on data in which intentions are arranged in a hierarchical structure and on the intentions activated at the time concerned;
a transition node determination unit that corrects the estimation result of the intention estimation unit according to the intention estimation weight determined by the intention estimation weight determination unit, and then determines an intention to newly transition to and activate;
a dialogue turn generation unit that generates a dialogue turn from the one or more intentions activated by the transition node determination unit; and
a dialogue control unit that, when a new input in natural language is given in response to the dialogue turn generated by the dialogue turn generation unit, controls at least one of the processes performed by the intention estimation unit, the intention estimation weight determination unit, the transition node determination unit, and the dialogue turn generation unit, and finally executes a set command by repeating the control. - The dialogue control apparatus according to claim 1, comprising, in place of the dialogue turn generation unit, a history-aware dialogue turn generation unit that generates a dialogue turn from the one or more intentions activated by the transition node determination unit, records commands executed as a result of the dialogue, and generates a dialogue turn using a list in which an entry is registered when, among the option intentions in the command execution history, an intention that did not become the executed intention is executed within a fixed time.
- The dialogue control apparatus according to claim 2, wherein the history-aware dialogue turn generation unit generates a dialogue turn for confirmation when, among the option intentions in the command execution history, an intention that did not become the executed intention is executed within a fixed time, and, after generating that dialogue turn, deletes the list and stops generating the confirmation dialogue turn when, among the option intentions present in the list, the intention that did not become the executed intention is not executed within the fixed time and this is repeated a set number of times.
- The dialogue control apparatus according to claim 1, further comprising a transition control unit that adds link information from a transition source to a transition destination when the intention determined by the transition node determination unit is a transition to an unexpected intention not defined by a link in the intention hierarchy,
wherein the transition node determination unit determines the intention to transition to by treating a link added by the transition control unit in the same way as a normal link. - The dialogue control apparatus according to claim 4, wherein the transition link control unit, when there are a plurality of transitions to the unexpected intentions and the plurality of unexpected intentions have a common intention as a parent node, replaces the transitions to the unexpected intentions with a transition to the parent node.
- A dialogue control method using a dialogue control apparatus that estimates the intention of an input in natural language, conducts a dialogue, and executes a command set as a result, the method comprising:
an intention estimation step of estimating the intention of the input based on data obtained by converting the input in natural language into a morpheme sequence;
an intention estimation weight determination step of determining an intention estimation weight for the intention estimated in the intention estimation step, based on data in which intentions are arranged in a hierarchical structure and on the intentions activated at the time concerned;
a transition node determination step of correcting the estimation result of the intention estimation step according to the intention estimation weight determined in the intention estimation weight determination step, and then determining an intention to newly transition to and activate;
a dialogue turn generation step of generating a dialogue turn from the one or more intentions activated in the transition node determination step; and
a dialogue control step of, when a new input in natural language is given in response to the dialogue turn generated in the dialogue turn generation step, controlling at least one of the intention estimation step, the intention estimation weight determination step, the transition node determination step, and the dialogue turn generation step, and finally executing a set command by repeating the control.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015549010A JP6073498B2 (ja) | 2013-11-25 | 2014-08-06 | 対話制御装置及び対話制御方法 |
CN201480057853.7A CN105659316A (zh) | 2013-11-25 | 2014-08-06 | 对话控制装置和对话控制方法 |
US14/907,719 US20160163314A1 (en) | 2013-11-25 | 2014-08-06 | Dialog management system and dialog management method |
DE112014005354.6T DE112014005354T5 (de) | 2013-11-25 | 2014-08-06 | Dialog-management-system und dialog-management-verfahren |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013242944 | 2013-11-25 | ||
JP2013-242944 | 2013-11-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015075975A1 true WO2015075975A1 (ja) | 2015-05-28 |
Family
ID=53179254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/070768 WO2015075975A1 (ja) | 2013-11-25 | 2014-08-06 | 対話制御装置及び対話制御方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20160163314A1 (ja) |
JP (1) | JP6073498B2 (ja) |
CN (1) | CN105659316A (ja) |
DE (1) | DE112014005354T5 (ja) |
WO (1) | WO2015075975A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018513405A (ja) * | 2015-08-17 | 2018-05-24 | 三菱電機株式会社 | 音声言語理解システム |
JP2019036171A (ja) * | 2017-08-17 | 2019-03-07 | Kddi株式会社 | 対話シナリオコーパスの作成支援システム |
CN117496973A (zh) * | 2024-01-02 | 2024-02-02 | 四川蜀天信息技术有限公司 | 一种提升人机对话交互体验感的方法、装置、设备及介质 |
JP7462995B1 (ja) | 2023-10-26 | 2024-04-08 | Starley株式会社 | 情報処理システム、情報処理方法及びプログラム |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105070288B (zh) * | 2015-07-02 | 2018-08-07 | 百度在线网络技术(北京)有限公司 | 车载语音指令识别方法和装置 |
US10083451B2 (en) | 2016-07-08 | 2018-09-25 | Asapp, Inc. | Using semantic processing for customer support |
US10453074B2 (en) | 2016-07-08 | 2019-10-22 | Asapp, Inc. | Automatically suggesting resources for responding to a request |
DE102016008855A1 (de) * | 2016-07-20 | 2018-01-25 | Audi Ag | Verfahren zum Durchführen einer Sprachübertragung |
JP2018054790A (ja) * | 2016-09-28 | 2018-04-05 | トヨタ自動車株式会社 | 音声対話システムおよび音声対話方法 |
KR101934280B1 (ko) * | 2016-10-05 | 2019-01-03 | 현대자동차주식회사 | 발화내용 분석 장치 및 방법 |
US10109275B2 (en) | 2016-12-19 | 2018-10-23 | Asapp, Inc. | Word hash language model |
US10650311B2 (en) | 2016-12-19 | 2020-05-12 | Asaap, Inc. | Suggesting resources using context hashing |
JP6873805B2 (ja) * | 2017-04-24 | 2021-05-19 | 株式会社日立製作所 | 対話支援システム、対話支援方法、及び対話支援プログラム |
US10762423B2 (en) | 2017-06-27 | 2020-09-01 | Asapp, Inc. | Using a neural network to optimize processing of user requests |
CN107240398B (zh) * | 2017-07-04 | 2020-11-17 | 科大讯飞股份有限公司 | 智能语音交互方法及装置 |
JP2019057123A (ja) * | 2017-09-21 | 2019-04-11 | 株式会社東芝 | 対話システム、方法、及びプログラム |
KR101932263B1 (ko) * | 2017-11-03 | 2018-12-26 | 주식회사 머니브레인 | 적시에 실질적 답변을 제공함으로써 자연어 대화를 제공하는 방법, 컴퓨터 장치 및 컴퓨터 판독가능 기록 매체 |
CN107832293B (zh) * | 2017-11-07 | 2021-04-09 | 北京灵伴即时智能科技有限公司 | 一种面向非自由谈话式汉语口语的对话行为分析方法 |
US10497004B2 (en) | 2017-12-08 | 2019-12-03 | Asapp, Inc. | Automating communications using an intent classifier |
JP2019106054A (ja) | 2017-12-13 | 2019-06-27 | 株式会社東芝 | 対話システム |
US10489792B2 (en) | 2018-01-05 | 2019-11-26 | Asapp, Inc. | Maintaining quality of customer support messages |
US10210244B1 (en) | 2018-02-12 | 2019-02-19 | Asapp, Inc. | Updating natural language interfaces by processing usage data |
US10169315B1 (en) | 2018-04-27 | 2019-01-01 | Asapp, Inc. | Removing personal information from text using a neural network |
US10776582B2 (en) * | 2018-06-06 | 2020-09-15 | International Business Machines Corporation | Supporting combinations of intents in a conversation |
US11216510B2 (en) | 2018-08-03 | 2022-01-04 | Asapp, Inc. | Processing an incomplete message with a neural network to generate suggested messages |
US11501763B2 (en) * | 2018-10-22 | 2022-11-15 | Oracle International Corporation | Machine learning tool for navigating a dialogue flow |
US11551004B2 (en) | 2018-11-13 | 2023-01-10 | Asapp, Inc. | Intent discovery with a prototype classifier |
US10747957B2 (en) | 2018-11-13 | 2020-08-18 | Asapp, Inc. | Processing communications using a prototype classifier |
US11043214B1 (en) * | 2018-11-29 | 2021-06-22 | Amazon Technologies, Inc. | Speech recognition using dialog history |
WO2020110249A1 (ja) * | 2018-11-29 | 2020-06-04 | 三菱電機株式会社 | 対話装置、対話方法、及び対話プログラム |
CN111737408B (zh) * | 2019-03-25 | 2024-05-03 | 阿里巴巴集团控股有限公司 | 基于剧本的对话方法、设备及电子设备 |
CN110377716B (zh) * | 2019-07-23 | 2022-07-12 | 百度在线网络技术(北京)有限公司 | 对话的交互方法、装置及计算机可读存储介质 |
US11425064B2 (en) | 2019-10-25 | 2022-08-23 | Asapp, Inc. | Customized message suggestion with user embedding vectors |
US20210158810A1 (en) * | 2019-11-25 | 2021-05-27 | GM Global Technology Operations LLC | Voice interface for selection of vehicle operational modes |
CN111538802B (zh) * | 2020-03-18 | 2023-07-28 | 北京三快在线科技有限公司 | 会话处理方法、装置、电子设备 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004251998A (ja) * | 2003-02-18 | 2004-09-09 | Yukihiro Ito | 対話理解装置 |
WO2007013521A1 (ja) * | 2005-07-26 | 2007-02-01 | Honda Motor Co., Ltd. | ユーザと機械とのインタラクションを実施するための装置、方法、およびプログラム |
JP2008203559A (ja) * | 2007-02-20 | 2008-09-04 | Toshiba Corp | 対話装置及び方法 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6490698B1 (en) * | 1999-06-04 | 2002-12-03 | Microsoft Corporation | Multi-level decision-analytic approach to failure and repair in human-computer interactions |
JP4363076B2 (ja) * | 2002-06-28 | 2009-11-11 | 株式会社デンソー | 音声制御装置 |
US7302383B2 (en) * | 2002-09-12 | 2007-11-27 | Luis Calixto Valles | Apparatus and methods for developing conversational applications |
US8265939B2 (en) * | 2005-08-31 | 2012-09-11 | Nuance Communications, Inc. | Hierarchical methods and apparatus for extracting user intent from spoken utterances |
CN101266793B (zh) * | 2007-03-14 | 2011-02-02 | 财团法人工业技术研究院 | 通过对话回合间上下文关系来减少辨识错误的装置与方法 |
JP4547721B2 (ja) * | 2008-05-21 | 2010-09-22 | 株式会社デンソー | 自動車用情報提供システム |
WO2010126321A2 (ko) * | 2009-04-30 | 2010-11-04 | 삼성전자주식회사 | 멀티 모달 정보를 이용하는 사용자 의도 추론 장치 및 방법 |
US8892419B2 (en) * | 2012-04-10 | 2014-11-18 | Artificial Solutions Iberia SL | System and methods for semiautomatic generation and tuning of natural language interaction applications |
CN103077165A (zh) * | 2012-12-31 | 2013-05-01 | 威盛电子股份有限公司 | 自然语言对话方法及其系统 |
US9665564B2 (en) * | 2014-10-06 | 2017-05-30 | International Business Machines Corporation | Natural language processing utilizing logical tree structures |
-
2014
- 2014-08-06 CN CN201480057853.7A patent/CN105659316A/zh active Pending
- 2014-08-06 JP JP2015549010A patent/JP6073498B2/ja active Active
- 2014-08-06 US US14/907,719 patent/US20160163314A1/en not_active Abandoned
- 2014-08-06 DE DE112014005354.6T patent/DE112014005354T5/de not_active Withdrawn
- 2014-08-06 WO PCT/JP2014/070768 patent/WO2015075975A1/ja active Application Filing
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018513405A (ja) * | 2015-08-17 | 2018-05-24 | 三菱電機株式会社 | 音声言語理解システム |
JP2019036171A (ja) * | 2017-08-17 | 2019-03-07 | Kddi株式会社 | 対話シナリオコーパスの作成支援システム |
JP7462995B1 (ja) | 2023-10-26 | 2024-04-08 | Starley株式会社 | 情報処理システム、情報処理方法及びプログラム |
CN117496973A (zh) * | 2024-01-02 | 2024-02-02 | 四川蜀天信息技术有限公司 | 一种提升人机对话交互体验感的方法、装置、设备及介质 |
CN117496973B (zh) * | 2024-01-02 | 2024-03-19 | 四川蜀天信息技术有限公司 | 一种提升人机对话交互体验感的方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN105659316A (zh) | 2016-06-08 |
DE112014005354T5 (de) | 2016-08-04 |
JP6073498B2 (ja) | 2017-02-01 |
US20160163314A1 (en) | 2016-06-09 |
JPWO2015075975A1 (ja) | 2017-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6073498B2 (ja) | 対話制御装置及び対話制御方法 | |
US10037758B2 (en) | Device and method for understanding user intent | |
WO2016067418A1 (ja) | 対話制御装置および対話制御方法 | |
JP4267385B2 (ja) | 統計的言語モデル生成装置、音声認識装置、統計的言語モデル生成方法、音声認識方法、およびプログラム | |
US9449599B2 (en) | Systems and methods for adaptive proper name entity recognition and understanding | |
US20080010070A1 (en) | Spoken dialog system for human-computer interaction and response method therefor | |
JP2017058673A (ja) | 対話処理装置及び方法と知能型対話処理システム | |
JP4186992B2 (ja) | 応答生成装置、方法及びプログラム | |
JP2001109493A (ja) | 音声対話装置 | |
JP2005321730A (ja) | 対話システム、対話システム実行方法、及びコンピュータプログラム | |
JP2006349954A (ja) | 対話システム | |
KR20210130024A (ko) | 대화 시스템 및 그 제어 방법 | |
JP6070809B1 (ja) | 自然言語処理装置及び自然言語処理方法 | |
JP2007041319A (ja) | 音声認識装置および音声認識方法 | |
EP3005152B1 (en) | Systems and methods for adaptive proper name entity recognition and understanding | |
US20060136195A1 (en) | Text grouping for disambiguation in a speech application | |
JP4798039B2 (ja) | 音声対話装置および方法 | |
JP4634156B2 (ja) | 音声対話方法および音声対話装置 | |
US11804225B1 (en) | Dialog management system | |
JP4486413B2 (ja) | 音声対話方法、音声対話装置、音声対話プログラム、これを記録した記録媒体 | |
JP2009198871A (ja) | 音声対話装置 | |
JP4537755B2 (ja) | 音声対話システム | |
JP2000330588A (ja) | 音声対話処理方法、音声対話処理システムおよびプログラムを記憶した記憶媒体 | |
KR20210032200A (ko) | 다중 언어 대화 서비스 제공 장치 및 방법 | |
WO2009147745A1 (ja) | 検索装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14863985 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015549010 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14907719 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112014005354 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14863985 Country of ref document: EP Kind code of ref document: A1 |