
CN111611357B - Configuration method of man-machine conversation system, multi-round conversation configuration platform and electronic equipment - Google Patents

Configuration method of man-machine conversation system, multi-round conversation configuration platform and electronic equipment

Info

Publication number
CN111611357B
CN111611357B (application CN201910141210.5A)
Authority
CN
China
Prior art keywords
configuration
user
dialogue
state
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910141210.5A
Other languages
Chinese (zh)
Other versions
CN111611357A (en)
Inventor
庞胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201910141210.5A priority Critical patent/CN111611357B/en
Publication of CN111611357A publication Critical patent/CN111611357A/en
Application granted granted Critical
Publication of CN111611357B publication Critical patent/CN111611357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the technical field of man-machine interaction, and particularly relates to a configuration method of a man-machine dialogue system, a multi-round dialogue configuration platform and electronic equipment. The configuration method comprises the following steps: receiving a configuration request input by a user; displaying a configuration interface in response to the configuration request; receiving a first configuration operation for a target task input by the user on the configuration interface, wherein the first configuration operation is used for configuring the functional attributes of the state points of a finite state machine and the functional attributes of the directed state edges between the state points; and completing the dialogue management logic configuration of the man-machine dialogue system based on the finite state machine in response to the first configuration operation. The configuration method enables a user to complete the dialogue management logic configuration of the man-machine dialogue system through the state points of the finite state machine and the directed state edges between them in an interactive manner, so that dialogue scene development is more convenient and intuitive, with good flexibility.

Description

Configuration method of man-machine conversation system, multi-round conversation configuration platform and electronic equipment
Technical Field
The invention belongs to the technical field of man-machine interaction, and particularly relates to a configuration method of a man-machine dialogue system, a multi-round dialogue configuration platform and electronic equipment.
Background
With the development of mobile intelligent terminals and information network technologies, people use man-machine interaction applications in more and more scenes. Customer service is extremely important to the service industry: as the number of users of a product increases, the demand for customer service also increases, and the efficiency with which problems are solved directly influences the user experience. The problems users bring to customer service are often highly repetitive, and for such scenes intelligent dialogue robots have emerged. These dialogue robots are task-oriented (problem-oriented), yet the problems users consult about change dynamically as the product evolves, so convenient development of dialogue scenes is of great significance for the iteration of dialogue robots, for user operation, and for improving user satisfaction.
Disclosure of Invention
In view of this, the embodiments of the present application provide a configuration method of a man-machine dialogue system, a multi-round dialogue configuration platform, and an electronic device, in which a finite state machine is created in an interactive manner to configure business logic, so that dialogue scene development is more convenient and intuitive.
Embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides a method for configuring a man-machine dialogue system, including: receiving a configuration request input by a user; displaying a configuration interface in response to the configuration request; receiving a first configuration operation for a target task input by the user on the configuration interface, wherein the first configuration operation is used for configuring the functional attributes of the state points of a finite state machine and the functional attributes of the directed state edges between the state points, the state points represent the dialogue state points of the man-machine dialogue system for completing the target task, a directed state edge represents that the dialogue state of the man-machine dialogue system can be transferred from the dialogue state point at one end of the directed state edge to the dialogue state point at its other end, and the functional attributes of the directed state edges are used for defining the conditions under which a state transfer occurs; and completing the dialogue management logic configuration of the man-machine dialogue system based on the finite state machine in response to the first configuration operation. In the embodiment of the application, the dialogue management logic configuration of the man-machine dialogue system is completed by creating the state points and the directed state edges between the state points of the finite state machine in an interactive manner, so that dialogue scene development is more convenient and intuitive, and flexibility is good.
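The finite-state-machine structure configured in this first aspect can be sketched in code. The sketch below is purely illustrative and not the patent's implementation: the class and attribute names (`StatePoint`, `DirectedEdge`, `prompt`) are hypothetical, but the shape mirrors the claim, with state points carrying functional attributes and directed state edges carrying transition conditions over the dialogue context.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class StatePoint:
    name: str
    prompt: str  # functional attribute: what the system says in this state (hypothetical)

@dataclass
class DirectedEdge:
    source: str
    target: str
    condition: Callable[[dict], bool]  # transition condition over the dialogue context

@dataclass
class DialogueFSM:
    states: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)
    current: Optional[str] = None

    def add_state(self, point: StatePoint) -> None:
        self.states[point.name] = point
        if self.current is None:
            self.current = point.name  # first created point acts as the start state

    def add_edge(self, edge: DirectedEdge) -> None:
        self.edges.append(edge)

    def step(self, context: dict) -> str:
        # Follow the first outgoing directed edge whose transition condition is met.
        for edge in self.edges:
            if edge.source == self.current and edge.condition(context):
                self.current = edge.target
                break
        return self.states[self.current].prompt

fsm = DialogueFSM()
fsm.add_state(StatePoint("ask_city", "Which city are you in?"))
fsm.add_state(StatePoint("confirm", "Booking a ride in your city, correct?"))
fsm.add_edge(DirectedEdge("ask_city", "confirm", lambda ctx: "city" in ctx))

print(fsm.step({"city": "Beijing"}))  # the edge's condition holds, so the state moves to "confirm"
```

A transition fires only when the edge's condition holds over the current dialogue context, which corresponds to the claim's "conditions under which a state transfer occurs".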
With reference to a possible implementation manner of the first aspect embodiment, receiving a first configuration operation for a target task input by the user on the configuration interface includes: receiving dialogue state point creation operation of the target task input by the user on the configuration interface; responding to the dialogue state point creation operation, generating dialogue state points and displaying the dialogue state points; receiving a function attribute configuration operation of the user aiming at each dialogue state point; receiving the directed state edge creation operation input by the user; generating and displaying directed state edges between state points corresponding to the directed state edge creation operations based on the directed state edge creation operations; and receiving configuration operation of the directed state edges input by the user, wherein the configuration operation of the directed state edges is used for configuring transition conditions of the directed state edges. In the embodiment of the application, when the service logic of the man-machine conversation system is configured in an interactive mode, the service logic configuration of the man-machine conversation system for a new scene can be rapidly completed by creating a conversation state point, defining the functional attribute of the conversation state point, creating a directed state edge of the conversation state point, defining the configuration of the directed state edge such as a transfer condition of the directed state edge, and a function call or an API call after the transfer condition is met, and the like.
With reference to a possible implementation manner of the first aspect embodiment, receiving a first configuration operation for a target task input by the user on the configuration interface further includes: and receiving the configuration operation of function call or API call, which is input by the user and is performed after each directed state edge meets the transfer condition. In the embodiment of the application, the user can also improve the flexibility of dialogue scene development by configuring the function call or API call after each directed state edge meets the transfer condition.
With reference to a possible implementation manner of the embodiment of the first aspect, when the configuration operation on the directed state edge is used to configure a third party API call after the transfer condition is met, the method further includes: displaying an API configuration interface, and receiving the name, address, request mode and return parameter settings of the third party API input by the user on the API configuration interface; the third party API is configured based on the settings of the name, address, request mode, and return parameters. In the embodiment of the application, the user can configure the third-party API according to actual needs, so that the method is convenient for a scene developer to develop a new scene and has the flexibility of a code configuration platform.
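The third-party API settings named above (name, address, request mode, return parameters) can be pictured as a small configuration record. This is a hypothetical sketch: the endpoint, field names, and helper functions are invented for illustration and are not part of the patent.

```python
# Hypothetical record of a third-party API call configured on a directed state
# edge; every value here is illustrative.
api_config = {
    "name": "query_order_status",                 # name shown in the interface list
    "address": "https://api.example.com/orders",  # third-party API address
    "request_mode": "GET",                        # common modes: GET, POST, PUT
    "return_params": ["status"],                  # fields passed back into the dialogue
}

def build_request(config: dict, params: dict) -> dict:
    """Assemble a request description from the configured address and request mode."""
    return {"url": config["address"], "method": config["request_mode"], "params": params}

def extract_return_params(config: dict, response_body: dict) -> dict:
    """Keep only the fields declared as return parameters."""
    return {k: response_body.get(k) for k in config["return_params"]}

# With a mocked response, only "status" survives, per the configured return parameters.
print(extract_return_params(api_config, {"status": "finished", "driver_id": 42}))
```

Splitting request assembly from return-parameter extraction keeps the edge's transfer logic independent of the third party's full response shape.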
With reference to a possible implementation manner of the embodiment of the first aspect, when the configuration operation on the directed state edge is used to configure a function call after the transfer condition is satisfied, the method further includes: displaying a function configuration interface, and receiving the names, the function contents and the settings of return parameters of the functions input by the user in the function configuration interface; the function is configured based on the name of the function, the content of the function, and the settings of the return parameters. In the embodiment of the application, a user can configure the required functions according to actual needs, so that the method is convenient for a scene developer to develop a new scene and has the flexibility of a code configuration platform.
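The function configuration described here (name, function content, return parameters) could be materialized roughly as follows. This is a sketch under the assumption that the platform stores the function body as text and compiles it into a callable; all names and values are illustrative, not from the patent.

```python
# Hypothetical function record as entered on the function configuration
# interface: a name, the function content, and the return parameter.
function_config = {
    "name": "format_price",
    "content": "def format_price(amount):\n    return f'{amount:.2f} yuan'",
    "return_params": ["text"],
}

# One way the platform could turn the configured content into a callable.
namespace: dict = {}
exec(function_config["content"], namespace)
format_price = namespace[function_config["name"]]

print(format_price(3.5))  # -> 3.50 yuan
```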
With reference to a possible implementation manner of the first aspect embodiment, before receiving a first configuration operation for a target task input by the user on the configuration interface, the method further includes: receiving a target task creation request input by the user on the configuration interface; and completing the creation of the target task according to the dialogue field, the dialogue intention and the dialogue slot carried in the target task creation request.
With reference to a possible implementation manner of the first aspect embodiment, after the displaying a configuration interface in response to the configuration request, the method further includes: receiving a second configuration operation input by the user on the configuration interface, wherein the second configuration operation is used for configuring NLU parameters of a natural language understanding module of the man-machine conversation system, and the NLU parameters comprise a conversation field, a conversation intention and a conversation slot; and responding to the second configuration operation to complete NLU parameter configuration of the natural language understanding module of the man-machine conversation system. In the embodiment of the application, the user can add intents (intent) and slots (slot) under the created field to complete the configuration of the natural language understanding module of the man-machine conversation system. Because the field is configured as well, the user does not need to handle switching among multiple fields, which simplifies the configuration procedure.
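The NLU parameters of the second configuration operation (conversation field, conversation intention, conversation slot) can be sketched as one configuration object. This is a hypothetical representation; the field, intent, and slot names are invented for illustration.

```python
# Hypothetical NLU configuration: one conversation field with its intents and slots.
nlu_config = {
    "field": "taxi_booking",
    "intents": ["book_taxi", "cancel_order"],
    "slots": {
        "pickup": {"type": "location", "required": True},
        "time": {"type": "datetime", "required": False},
    },
}

def required_slots(config: dict) -> list:
    """Slots the dialogue manager must fill (by inquiry or clarification) before acting."""
    return [name for name, spec in config["slots"].items() if spec["required"]]

print(required_slots(nlu_config))  # -> ['pickup']
```

Bundling intents and slots under their field is what lets the platform route an utterance by field recognition first, so the user never configures cross-field switching by hand.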
With reference to a possible implementation manner of the embodiment of the first aspect, after completing NLU parameter configuration of a natural language understanding module of the human-machine conversation system in response to the second configuration operation, the method further includes: responding to a test instruction which is input by the user on the configuration interface and aims at the natural language understanding module, and generating a test address; and issuing the test address.
With reference to a possible implementation manner of the first aspect embodiment, after completing the finite state machine based dialog management logic configuration of the human-machine dialog system in response to the first configuration operation, the method further includes: receiving a third configuration operation input by the user on the configuration interface, wherein the third configuration operation is used for configuring natural language generation parameters of the man-machine conversation system; and responding to the third configuration operation to complete the natural language generation parameter configuration of the man-machine conversation system.
With reference to a possible implementation manner of the first aspect embodiment, the configuration interface displays a component selection list, where the component selection list includes a plurality of selection components with different purposes; receiving a third configuration operation input by the user on the configuration interface, including: receiving a component determining instruction input by the user in the component selection list, and displaying a corresponding component configuration interface; and receiving the component configuration operation input by the user on the configuration interface.
With reference to one possible implementation manner of the embodiment of the first aspect, after configuring a dialogue management module, a natural language understanding module, and a natural language generating module in the man-machine dialogue systems corresponding to at least two target tasks respectively, the method further includes: issuing a unified dialogue interface through which a user can converse with any one of the man-machine dialogue systems respectively corresponding to the at least two target tasks. In the embodiment of the application, man-machine dialogue systems for different scenes can be published through the same dialogue interface; that is, a developer only needs to configure one dialogue interface, and all application scenes configured by the developer can be triggered by accessing that interface. The input is then routed to the corresponding scene through field recognition, so the developer does not need to consider switching among multiple scenes during configuration, and automatic multi-scene switching is achieved.
In a second aspect, an embodiment of the present application further provides a multi-round dialog configuration platform, including: the configuration module is used for receiving a configuration request input by a user; the configuration interface is also used for responding to the configuration request to display a configuration interface; the method comprises the steps of receiving a first configuration operation input by a user on a configuration interface and aiming at a target task, wherein the first configuration operation is used for configuring functional attributes of state points of a finite state machine and functional attributes of directed state edges between the state points, the state points represent dialogue state points of a man-machine dialogue system for completing the target task, the directed state edges represent dialogue states of the man-machine dialogue system and can be transferred from one end dialogue state point of the directed state edges to the other end dialogue state point of the directed state edges, and the functional attributes of the directed state edges are used for limiting conditions meeting state transfer; and the finite state machine-based dialog management logic configuration of the man-machine dialog system is completed in response to the first configuration operation.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: receiving dialogue state point creation operation of the target task input by the user on the configuration interface; responding to the dialogue state point creation operation, generating dialogue state points and displaying the dialogue state points; receiving a function attribute configuration operation of the user aiming at each dialogue state point; receiving the directed state edge creation operation input by the user; generating and displaying directed state edges between state points corresponding to the directed state edge creation operations based on the directed state edge creation operations; and receiving configuration operation of the directed state edges input by the user, wherein the configuration operation of the directed state edges is used for configuring transition conditions of the directed state edges.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: and receiving the configuration operation of function call or API call, which is input by the user and is performed after each directed state edge meets the transfer condition.
With reference to a possible implementation manner of the second aspect embodiment, when the configuration operation on the directed state edge is used to configure a third party API call after the transfer condition is met, the configuration module is further configured to: receiving the name, address, request mode and return parameter settings of the third party API input by the user on an API configuration interface; the third party API is configured based on the settings of the name, address, request mode, and return parameters.
With reference to a possible implementation manner of the second aspect embodiment, when the configuration operation on the directed state edge is used to configure a function call after the transfer condition is met, the configuration module is further configured to: receiving the name, the function content and the return parameter settings of the function input by the user in a function configuration interface; the function is configured based on the name of the function, the content of the function, and the settings of the return parameters.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: receiving a target task creation request input by the user on the configuration interface; and completing the creation of the target task according to the dialogue field, the dialogue intention and the dialogue slot carried in the target task creation request.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: receiving a second configuration operation input by the user on the configuration interface, wherein the second configuration operation is used for configuring NLU parameters of a natural language understanding module of the man-machine conversation system, and the NLU parameters comprise a conversation field, a conversation intention and a conversation slot; and responding to the second configuration operation to complete NLU parameter configuration of the natural language understanding module of the man-machine conversation system.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: responding to a test instruction which is input by the user on the configuration interface and aims at the natural language understanding module, and generating a test address; and issuing the test address.
With reference to a possible implementation manner of the second aspect embodiment, the configuration module is further configured to: receiving a third configuration operation input by the user on the configuration interface, wherein the third configuration operation is used for configuring natural language generation parameters of the man-machine conversation system; and responding to the third configuration operation to complete the natural language generation parameter configuration of the man-machine conversation system.
With reference to a possible implementation manner of the second aspect embodiment, the configuration interface displays a component selection list, where the component selection list includes a plurality of selection components with different purposes, and the configuration module is further configured to: receiving a component determining instruction input by the user in the component selection list, and displaying a corresponding component configuration interface; and receiving the component configuration operation input by the user on the configuration interface.
With reference to one possible implementation manner of the second aspect embodiment, after configuring the man-machine conversation systems corresponding to at least two target tasks respectively, the configuration module is further configured to: and issuing a unified dialogue interface through which a user can perform dialogue with any one of the man-machine dialogue systems respectively corresponding to at least two target tasks.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor. When the electronic device is running, the processor and the storage medium communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the method provided by the first aspect embodiment and/or any of its possible implementation manners.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The above and other objects, features and advantages of the present invention will become more apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the several views of the drawings. The drawings are not intended to be drawn to scale, with emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 shows a flowchart of a configuration method of a man-machine interaction system provided by an embodiment of the present application.
Fig. 2 shows a schematic configuration diagram of a finite state machine according to an embodiment of the present application.
FIG. 3 illustrates an interface diagram for task creation provided by an embodiment of the present application.
Fig. 4 shows an interface schematic diagram of the third party API configuration provided by the embodiment of the present application.
Fig. 5 shows an interface schematic of a configuration function according to an embodiment of the present application.
Fig. 6 shows an interface schematic diagram of a configuration natural language generation module according to an embodiment of the present application.
Fig. 7 is an interface schematic diagram of a natural language generating module according to an embodiment of the present application.
Fig. 8 is a schematic functional structure diagram of a multi-round dialogue configuration platform according to an embodiment of the present application.
Fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. It is to be understood that the drawings are designed solely for the purposes of illustration and description and not as a definition of the limits of the application. It should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance. Furthermore, the term "and/or" in the present application is merely an association relationship describing the association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone.
For the development of multi-round dialogue robots, there are currently two development modes: one based on code configuration and hard coding, the other based on platform dialogue configuration. The code-configuration-based mode is flexible, and a developer can implement complex business logic and dialogue logic in a program. However, this mode is unfavorable for the optimization and rapid iteration of dialogue scenes, scene developers are limited to program developers, and as service demand grows, developing new scenes entirely in this mode makes it difficult to meet the requirements of rapid iteration and expansion of dialogue scenes. Platform-based configuration makes it convenient for scene developers to develop new scenes at low development cost, and generally completes a dialogue by sequentially querying and clarifying dialogue slots. Platform-style development is convenient for product managers or operators and reduces development cost, but sacrifices flexibility: for some complex scenes, merely querying or clarifying slots in a fixed manner makes the dialogue templates rigid, and for scenes that need to call third-party interfaces, existing platform configuration modes are difficult to support.
It should be noted that the drawbacks of the above solutions were identified by the inventors after practice and careful study; therefore, the discovery process of the above problems and the solutions proposed below by the embodiments of the present application should both be regarded as contributions made by the inventors to the present application.
In view of this, the embodiment of the application provides a new multi-round dialogue configuration platform that integrates the advantages of the code configuration (or development) and platform configuration modes: the flexibility of the code development mode is retained, while the development of new multi-round dialogue scenes becomes more convenient and intuitive. The multi-round dialogue configuration platform provides a third-party API call interface, a code writing interface, multi-field switching, an online (publishing) interface and a test interface, so that a user can complete the development and publishing of a new multi-round dialogue scene in a one-stop manner.
From the architectural point of view, the multi-round dialogue configuration platform provided by the embodiment of the application is divided into three modules: front end, back end and dialogue engine. The front end interacts directly with the user, and the data produced by user operations is synchronized to the back end; the back end controls the dialogue engine to load the user-configured dialogue logic and the like. For example, the data flow of a user operating a certain dialogue scene is: the user operates at the front end -> the back end -> the dialogue engine. The architecture may be either a CS (Client/Server) architecture or a BS (Browser/Server) architecture, which is not further limited herein.
The dialogue engine is divided into three main modules: an NLU (Natural Language Understanding) module, a DM (Dialogue Management) module, and an NLG (Natural Language Generation) module. The NLU module is mainly responsible for scene (field) recognition, intention recognition and slot extraction. The DM module is responsible for dialogue management and controls the process of the man-machine dialogue: it decides how to react to the user at each moment according to the dialogue history. The most common application is task-driven multi-round dialogue, in which the user has a clear purpose, such as ordering a meal or booking a ticket. The user's demand is usually complex and may need to be stated under many limiting conditions over several rounds; on the one hand, the user can continuously modify or refine the demand during the dialogue, and on the other hand, when the demand stated by the user is not specific or clear enough, the machine can help the user find a satisfactory result through inquiry, clarification or confirmation. The DM module takes the output of the NLU module as input, outputs an action and updates the context; the NLG module takes the action and generates natural language from it to be displayed to the user.
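The NLU -> DM -> NLG flow described above can be sketched end to end. This is a toy illustration with invented keyword rules and templates, not the engine's actual logic: NLU extracts intent and slots, DM decides an action from the dialogue context, and NLG renders the action as text.

```python
def nlu(utterance: str) -> dict:
    """Toy understanding step: keyword-based intent and slot extraction."""
    result = {"field": "food", "intent": None, "slots": {}}
    if "order" in utterance:
        result["intent"] = "order_meal"
    for dish in ("noodles", "rice"):
        if dish in utterance:
            result["slots"]["dish"] = dish
    return result

def dm(nlu_output: dict, context: dict) -> str:
    """Decide the next action: ask for missing slots, confirm when complete."""
    context.update(nlu_output["slots"])
    if nlu_output["intent"] == "order_meal" and "dish" not in context:
        return "ask_dish"  # clarification: the stated demand is not specific enough
    return "confirm_order"

def nlg(action: str, context: dict) -> str:
    """Render the DM's action as natural language shown to the user."""
    templates = {
        "ask_dish": "What would you like to order?",
        "confirm_order": "Confirming your order of {dish}.",
    }
    return templates[action].format(**context)

context: dict = {}
action = dm(nlu("I want to order noodles"), context)
print(nlg(action, context))  # -> Confirming your order of noodles.
```

Because the DM updates the context on every round, a user can refine the demand over several turns and the machine only asks for what is still missing.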
A user can complete the development and publishing of a man-machine dialogue system for a new multi-round dialogue scene based on the multi-round dialogue configuration platform according to their own needs. The configured dialogue process can be parsed by the dialogue engine to generate a dialogue robot capable of interacting according to that process. In academic terms, such a dialogue robot is called a conversational agent, defined as a software program that interprets and responds to statements made by users in ordinary natural language. This process will be described with reference to the flowchart of the configuration method of the man-machine dialogue system shown in Fig. 1.
Step S101: and receiving a configuration request input by a user.
The configuration request is used for causing the back end of the multi-round dialogue configuration platform to return a configuration interface, which the user then configures according to his or her requirements.
The configuration request includes account information and/or IP (Internet Protocol) address information of the user. The corresponding configuration interface is returned based on the account information and/or the IP address information.
Step S102: and responding to the configuration request to display a configuration interface.
The user operates at the front end, the data is synchronized to the back end, and after receiving the configuration request input by the user, the back end responds to the configuration request by displaying the configuration interface.
Optionally, the configuration interface is an interface on which various functional components are preset; for example, the configuration interface displays components such as a task list, an interface list, a function list, an action configuration, and an NLU configuration. The user can display a corresponding interface by selecting a different functional component. If the task list is selected, each task created by the user is displayed in the task list; if no task has been created, the list is empty. Selecting one of the tasks shows the basic information of the task; clicking the logic diagram enters a logic diagram editing interface, on which the user can complete the creation of a finite state machine and the dialogue management logic configuration for the task. For another example, when the interface list is selected, a corresponding API configuration interface is displayed, and the user inputs the required parameters in the input boxes to configure an interface for a third-party API request, where the input parameters may be dialogue information (intent, slot, etc.) at a state point in the finite state machine, intermediate information (output generated by other API calls or by function calls), or some static information. API requests support common request modes (e.g., POST, GET, PUT, etc.). For another example, selecting the function list displays the processing functions built into the platform (the platform has some commonly used information processing functions built in, but these cannot cover all scenes); the user may also customize the functions required, for example by writing a python function.
For example, selecting the action configuration displays a corresponding action configuration interface, in which each component selection list is displayed, so that the user can perform function configuration on the corresponding action configuration interface; in addition, the required functional components can be customized. For another example, selecting the NLU configuration enters an NLU configuration editing interface, where the user can configure dialogue domains, dialogue intents, and dialogue slots.
Step S103: and receiving a first configuration operation input by the user on the configuration interface and aiming at a target task.
The first configuration operation is an operation triggered by a user according to the option information displayed on the configuration interface.
The first configuration operation is used for configuring the functional attributes of the state points of a finite state machine and the functional attributes of the directed state edges between the state points. A finite state machine is a mathematical model that represents a finite number of states together with the transitions and actions between these states; its role is mainly to describe the sequence of states an object experiences during its lifecycle, and how the object responds to various events from the outside world. The finite state machine comprises state points and directed state edges connecting the state points. A state point characterizes a dialogue state point of the man-machine dialogue system for completing the target task, i.e., a dialogue state point of the conversation robot. A directed state edge characterizes that the dialogue state of the man-machine dialogue system can be transferred from the dialogue state point at one end of the directed state edge to the dialogue state point at the other end; that is, the directed state edge represents the jump relationship between state points, such as transferring from the current state to the next state along the direction of the directed state edge, or remaining in the original state. Here, the current state is the state the system is currently in, and the next state is the new state to which the system transfers once a condition is met; the next state is relative to the current state, and once activated, the next state becomes the new "current state". The functional attribute of a directed state edge is a condition limiting when the state transition is satisfied: when a condition is met, an action is triggered or a state transition is performed.
That is, the conversation robot uses this logic to conduct the dialogue: it judges, according to the sentence input by the user and the current state, whether the condition for a state jump is satisfied, and if so jumps to the next dialogue state point. Optionally, receiving the first configuration operation for the target task input by the user on the configuration interface includes: receiving a logic diagram editing instruction input by the user on the configuration interface, and displaying a logic diagram editing interface; receiving a dialogue state point creation operation for the target task input by the user on the logic diagram editing interface; responding to the dialogue state point creation operation by generating dialogue state points and displaying them on the logic diagram editing interface; receiving a functional attribute configuration operation of the user for each dialogue state point; receiving a directed state edge creation operation input by the user on the logic diagram editing interface; generating and displaying, based on the directed state edge creation operation, the directed state edges between the corresponding state points; and receiving a configuration operation on the directed state edges input by the user, where the configuration operation on the directed state edges is used for configuring the transition conditions of the directed state edges. That is, the user may complete the creation of the state points in the finite state machine through the visual interface, then configure the created state points, then create the directed state edges, and then configure the created directed state edges, such as configuring the transition conditions that characterize the jump relationships between the state points.
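The finite-state-machine model described above (state points, directed state edges with transition conditions, and remaining in the current state when no condition is met) can be sketched as follows; the class and method names are illustrative assumptions rather than the platform's actual data model.

```python
# Minimal finite-state-machine sketch: state points, directed state
# edges with transition conditions, an optional callback fired after a
# condition is met, and the rule that the machine keeps its current
# state when no condition is satisfied. Names are assumptions.

class FiniteStateMachine:
    def __init__(self, start_state):
        self.current = start_state
        self.edges = {}  # state point -> list of (condition, next_state, action)

    def add_edge(self, src, condition, dst, action=None):
        # A directed state edge from src to dst, with an optional
        # function/API callback triggered when the condition is met.
        self.edges.setdefault(src, []).append((condition, dst, action))

    def step(self, nlu_output):
        for condition, dst, action in self.edges.get(self.current, []):
            if condition(nlu_output):
                if action is not None:
                    action(nlu_output)   # output action on the edge
                self.current = dst       # the next state becomes current
                return self.current
        return self.current              # no condition satisfied: stay

fsm = FiniteStateMachine("ask_city")
fsm.add_edge("ask_city",
             lambda out: "city" in out.get("slots", {}),
             "answer_weather")
```

Each user turn calls `step()` with the NLU output; the machine either jumps along a satisfied edge or stays where it is.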
Optionally, receiving the first configuration operation for the target task input by the user on the configuration interface further includes: receiving a configuration operation, input by the user, for the function call or API call to be performed after each directed state edge meets its transition condition. That is, the user may also configure the function call or API call performed after the transition condition is satisfied, so as to configure the output action of the directed state edge.
Step S104: and completing the dialogue management logic configuration of the man-machine dialogue system based on the finite state machine in response to the first configuration operation.
A first configuration operation for a target task input by the user on the configuration interface is received, and in response to the first configuration operation, the finite-state-machine-based dialogue management logic configuration of the man-machine dialogue system is completed, i.e., the management logic configuration of the dialogue management module of the conversation robot is completed. In other words, the dialogue management logic of the conversation robot is implemented by the finite state machine: according to the sentence input by the user and the current state, it is judged whether the condition for a state jump is satisfied, and the jump to the next dialogue state point is completed.
As an optional implementation manner, completing the management logic configuration of the dialogue management module of the conversation robot in response to the first configuration operation includes: responding to a logic diagram editing instruction input by the user on the configuration interface by displaying a logic diagram editing interface; responding to a dialogue state point creation operation input by the user on the logic diagram editing interface by generating dialogue state points and displaying them on the logic diagram editing interface; responding to the functional attribute configuration operation of the user for each dialogue state point by completing the corresponding functional attribute configuration of the conversation robot; responding to a directed state edge creation operation input by the user on the logic diagram editing interface by generating and displaying directed state edges between the corresponding state points; and responding to the configuration operation on the directed state edges input by the user by completing the corresponding functional attribute configuration of the conversation robot, where the configuration operation on the directed state edges is used for configuring the transition condition of each directed state edge, and the function call or API call performed after the transition condition is met.
Optionally, when the functional attributes of the directed state edges of the dialogue state points are configured, the logic diagram editing interface displays a callback list, whose selection menu includes a plurality of built-in APIs and functions. In this case, responding to the configuration operation on the directed state edges input by the user includes: responding to the transition condition configuration operation on a directed state edge input by the user by completing the transition condition configuration of the corresponding directed state edge; and responding to the call instruction, input by the user in the callback list for each directed state edge, by completing the function call or API call of the corresponding directed state edge after the transition condition is met. That is, when the user configures the transition condition of each directed state edge and the function call or API call performed after the transition condition is satisfied, the transition condition may be configured in the logic diagram editing interface of the corresponding directed state edge, and the function call or API call to be performed after the transition condition is satisfied may then be selected from the callback list.
For ease of understanding, this may be described in connection with the configuration diagram shown in fig. 2. When configuring the transition condition of a state edge, the "condition" button in fig. 2 may be clicked to configure the transition condition; fig. 2 shows an example in which the transition condition has not yet been configured. When configuring the function call or API call performed after the transition condition is met, an existing API call or custom function call is selected from the callback list.
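The selection of a callback from the callback list might be represented roughly as follows; the registry, the stub callbacks, and the edge record are assumptions for illustration only.

```python
# Sketch of how a directed state edge's configuration could be stored:
# a transition condition plus the name of a callback (built-in API or
# custom function) chosen from the callback list. All names are
# illustrative assumptions, not the platform's real schema.

CALLBACK_LIST = {
    # Built-in API call stub and custom function stub.
    "query_weather_api": lambda slots: {"weather": "sunny"},
    "generate_number":   lambda slots: {"number": 42},
}

def make_edge(condition, callback_name):
    # Resolve the callback name against the callback list at
    # configuration time, so the edge carries a callable.
    return {"condition": condition, "callback": CALLBACK_LIST[callback_name]}

edge = make_edge(condition="slot:city is filled",
                 callback_name="query_weather_api")
result = edge["callback"]({"city": "Beijing"})
```

At dialogue time, the dialogue engine evaluates `edge["condition"]` and, once it holds, invokes `edge["callback"]` with the current slot values.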
The process of step S104 is also called the configuration process of the dialogue management module. When a user wants to configure the management logic of the dialogue management module of the conversation robot for the target task on the multi-round dialogue configuration platform, the configurator may select a target task that has already been created. Since the domain of the target task may be updated, there may be multiple versions; after the creation of the target task is completed, a domain version needs to be created. Each domain version has a corresponding dialogue state diagram (composed of state points and directed state edges). The configurator can connect the required dialogue state points by creating anchor points on the dialogue state points and dragging between them; after the connection, a directed edge (also called a directed state edge) exists between the connected dialogue state points, and the transition condition, as well as the function call or API call performed after the transition condition is met, can be configured on that edge.
The rectangular blocks in fig. 2 are dialogue state points, the small dots within the dialogue state points are anchor points, and the arrowed connecting lines in fig. 2 are directed state edges.
After the user completes the configuration of the dialogue management module, the user can modify, delete, or release it. For example, when the user selects a test instruction for the dialogue management module on the configuration interface, the back end responds to the test instruction input by the user on the configuration interface and generates a test address to release the module to the test environment for testing. Releasing may be to a test environment or to an online environment; the module must be released to the test environment before it can be released to the online environment.
Optionally, before receiving the first configuration operation for the target task input by the user on the configuration interface, the method further includes: receiving a target task creation request input by the user on the configuration interface, and completing the creation of the target task according to the dialogue domain, dialogue intention, and dialogue slot carried in the target task creation request. That is, the user may enter the task list interface after clicking the task list; the user may then create a task in the task interface, for example by clicking a button similar to "create task" or "+", which opens the task creation interface where the user can input parameters. For example, the user may create a domain on this interface, fill in all the intents and slots that will be involved in the domain when creating it, and finally click a button similar to "confirm" or "save", thereby completing the creation of the target task. The process of task creation may be referred to in fig. 3.
When the configuration operation on the state edge is used for configuring a third-party API call after the transition condition is met, the API call selected from the callback list has been set in advance. Optionally, the third-party API may be configured as follows: receiving the name, address, request mode, and return parameter settings of the third-party API input by the user on an API configuration interface; and configuring the third-party API based on the settings of the name, address, request mode, and return parameters. The user can enter the API configuration interface by clicking the interface list on the configuration interface, and then input parameters such as the name, address, request mode, and return parameters of the third-party API, so that the back end completes the configuration of the third-party API based on these parameters. The specific configuration process can be referred to the configuration interface shown in fig. 4; for example, "query weather" in the figure is the name of the API, the request mode is POST, the request parameter is "cityid", and the address is "http://100.90.144.143:8004/kbqa".
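A configuration record with the fields listed above might look roughly like this; the "query weather" values mirror the example of fig. 4, while the builder function and field names are illustrative assumptions.

```python
# Sketch of a third-party API configuration record using the fields the
# API configuration interface asks for. The builder below only prepares
# the request from dialogue slot values; it does not perform any network
# call, so the sketch stays self-contained.

api_config = {
    "name": "query weather",
    "address": "http://100.90.144.143:8004/kbqa",
    "request_mode": "POST",            # POST, GET, PUT, etc. are supported
    "request_params": ["cityid"],
    "return_params": ["weather"],
}

def build_request(config, slot_values):
    # Fill the configured request parameters from dialogue slot values.
    body = {p: slot_values[p] for p in config["request_params"]}
    return config["request_mode"], config["address"], body

mode, url, body = build_request(api_config, {"cityid": "010"})
```

At runtime the dialogue engine would send `body` to `url` using the configured request mode and read back the declared return parameters.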
When the configuration operation on the state edge is used for configuring a function call after the transition condition is met, the function call selected from the callback list is likewise preset. Optionally, the function can be configured as follows: receiving the name, function content, and return parameter settings of the function input by the user on a function configuration interface; and configuring the function based on the name of the function, the content of the function, and the settings of the return parameters. The user can enter the function configuration interface by clicking the function list on the configuration interface and input the name of the function, the content of the function, the return parameters, and other contents, and the back end completes the configuration of the function based on these parameters. For a specific configuration process, reference may be made to the configuration interface shown in fig. 5, where "generated number" is the name of the function.
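Registering a custom function by name, content, and return parameters might be sketched as follows; the registry and the body of the "generated number" function are illustrative assumptions.

```python
# Sketch of registering a custom processing function with the fields the
# function configuration interface collects (name, function content,
# return parameters). The registry and the example function body are
# assumptions for illustration only.

FUNCTION_REGISTRY = {}

def register_function(name, func, return_params):
    FUNCTION_REGISTRY[name] = {"func": func, "return_params": return_params}

def generated_number(context):
    # Hypothetical custom python function: derive a number from the
    # dialogue context (e.g., a pickup number for an order).
    return {"number": len(context.get("history", [])) + 1}

register_function("generated number", generated_number, ["number"])
out = FUNCTION_REGISTRY["generated number"]["func"]({"history": ["a", "b"]})
```

A callback list entry then only needs to store the function name; the engine looks the callable up in the registry when the edge fires.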
Optionally, after displaying the configuration interface in response to the configuration request, the method further comprises: receiving a second configuration operation input by the user on the configuration interface, where the second configuration operation is used for configuring the NLU parameters of the natural language understanding module of the conversation robot, the NLU parameters including the dialogue domain, dialogue intention, and dialogue slot; and completing the NLU parameter configuration of the natural language understanding module of the conversation robot in response to the second configuration operation. That is, the user may configure the natural language understanding module (NLU module) of the conversation robot. The user can create domains under the natural language understanding module, and for each created domain, intents and slots can be added. When creating an intent, the user is required to add basic corpus for the intent; the more corpus is added, the more accurate the intent recognition becomes. When creating slots, slots built into the platform, such as time slots and place slots, can be selected; common slots are added by selecting the built-in slot options. The natural language understanding module also supports custom slots, in two ways: slots matched through exact matching, and slots matched through regular expressions.
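The two custom-slot matching modes mentioned above may be sketched as follows; the CustomSlot class and its fields are illustrative assumptions.

```python
# Sketch of the two custom-slot modes: exact matching against a value
# list, and regular-expression matching. The class is an illustrative
# assumption, not the platform's real slot implementation.

import re

class CustomSlot:
    def __init__(self, name, values=None, pattern=None):
        self.name = name
        self.values = values      # exact-match slot: list of valid values
        self.pattern = pattern    # regex-match slot: pattern string

    def match(self, text):
        # Return the matched slot value, or None if nothing matches.
        if self.values is not None:
            return next((v for v in self.values if v in text), None)
        m = re.search(self.pattern, text)
        return m.group(0) if m else None

city_slot = CustomSlot("city", values=["Beijing", "Shanghai"])
phone_slot = CustomSlot("phone", pattern=r"\d{11}")
```

The NLU module would run each configured slot's `match` over the utterance and hand the filled slots to the DM module.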
Optionally, after completing the NLU parameter configuration of the natural language understanding module of the conversation robot in response to the second configuration operation, the method further comprises: generating a test address in response to a test instruction for the natural language understanding module input by the user on the configuration interface; and issuing the test address so as to release the natural language understanding module to a test environment for testing. After the user has added all intents and slots in a domain, the system can directly train the intent classification for that domain. Because the NLU configuration module is an independent module, it can be released directly to a test environment or an online environment through the platform after training is completed. The user must test in the test environment before releasing to the online environment.
Optionally, after completing the finite-state-machine-based dialogue management logic configuration of the man-machine dialogue system in response to the first configuration operation, the method further comprises: receiving a third configuration operation input by the user on the configuration interface, where the third configuration operation is used for configuring the natural language generation parameters of the man-machine dialogue system; and completing the natural language generation parameter configuration of the conversation robot in response to the third configuration operation. That is, after the management logic configuration of the dialogue management module is completed, the user can configure the natural language generation module of the conversation robot. Since the interaction between the multi-round conversation robot and the user mainly takes the form of utterances and card information, some protocols related to front-end display exist in the NLG module, and these protocols can be abstracted into specific components, such as a list selection component, an order selection component, and a progress viewing component. That is, the user clicks the action configuration to enter an action configuration interface, which displays a component selection list containing a plurality of selection components with different purposes, and the user may perform functional configuration on these components. In this case, receiving the third configuration operation input by the user on the configuration interface includes: receiving a component determination instruction input by the user in the component selection list, and displaying a corresponding component configuration interface; and receiving the component configuration operation input by the user on the configuration interface.
The specific configuration process can be described with reference to the schematic diagrams shown in fig. 6 and fig. 7. Assuming the task created by the user is a weather inquiry task, after the management logic configuration of the dialogue management module is completed by the method shown in fig. 1, clicking the action configuration causes the action configuration interface to display the component list shown in fig. 6; when the user selects "request_date" in fig. 6, the interface shown in fig. 7 is displayed, and the user can then define the functions of the component on that interface.
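The abstraction of front-end display protocols into components might be sketched roughly as follows; the component schema and type tags are assumptions for illustration, not the platform's real protocol.

```python
# Sketch of rendering an NLG display component: each component pairs a
# type tag (interpreted by the front end) with its payload data. The
# type names and schema are illustrative assumptions.

SUPPORTED_COMPONENTS = {"list_select", "order_select", "progress_view"}

def render_component(component_type, payload):
    if component_type not in SUPPORTED_COMPONENTS:
        raise ValueError(f"unknown component: {component_type}")
    return {"type": component_type, "data": payload}

# A "request_date" action might emit a list-selection card like this.
card = render_component("list_select",
                        {"title": "request_date",
                         "options": ["today", "tomorrow"]})
```

The front end would dispatch on the `type` field to decide how the card information is displayed to the user.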
After the configuration of each module is completed based on the target task, the platform can synchronize the configuration information to the test environment and the online environment. For example, a zookeeper (a distributed application coordination service) is used to maintain the IP addresses of the test machines and the online machines; both the online machines and the test machines run a command server (an HTTP service) that registers the IP address of its machine with the zookeeper at startup. The command server is used for receiving instructions sent by the back end of the multi-round dialogue platform. When a user operates the platform front end to go online, the back end receives the online request, reads the list of IP addresses registered for the target environment in the zookeeper, and sends the instruction to the corresponding command servers; finally, the machines on which the command servers run complete the state synchronization.
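The registration-and-broadcast scheme above can be sketched with an in-memory stand-in for the zookeeper registry; the function names and the in-memory dictionary are assumptions, and a production system would use a real coordination service rather than this toy registry.

```python
# Toy sketch of the synchronization scheme: machines register their IP
# addresses under a target environment (zookeeper in production), and
# the platform back end sends a sync instruction to every registered
# command server. The registry and send hook are illustrative stand-ins.

registry = {"test": [], "online": []}   # environment -> IP address list

def register(environment, ip):
    # Done by each machine's command server at startup.
    registry[environment].append(ip)

def publish(environment, instruction, send):
    # Back end reads the registered IP list for the target environment
    # and sends the instruction to each command server.
    for ip in registry[environment]:
        send(ip, instruction)

sent = []
register("test", "10.0.0.1")
register("test", "10.0.0.2")
publish("test", "sync_config", lambda ip, cmd: sent.append((ip, cmd)))
```

In the real scheme, `send` would be an HTTP request to the command server, and the registry entries would be ephemeral nodes that disappear when a machine goes down.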
Optionally, after the man-machine dialogue systems corresponding to at least two target tasks are configured, that is, after the dialogue management module, the natural language understanding module, and the natural language generation module of the conversation robots corresponding to at least two target tasks are configured, the method further includes: issuing a unified dialogue interface, through which a user can converse with any one of the man-machine dialogue systems respectively corresponding to the at least two target tasks. After the multi-round dialogue configuration platform provided by the embodiment of the application is used to configure conversation robots for different scenes, the conversation robots for the different scenes can be issued through the same dialogue interface. That is, a developer only needs to configure one dialogue interface, and all the application scenes the developer has configured can be launched simply by accessing that dialogue interface; each request is then mapped to the corresponding scene through domain recognition.
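The unified dialogue interface with domain-based routing might be sketched as follows; the robots, the keyword-based domain recognition, and the reply strings are all illustrative assumptions.

```python
# Sketch of a unified dialogue interface: one entry point routes each
# utterance to the robot for its recognized domain. All names, the
# keyword routing, and the replies are illustrative assumptions.

robots = {
    "weather": lambda text: "Weather bot: it is sunny.",
    "ticket":  lambda text: "Ticket bot: which date?",
}

def recognize_domain(text):
    # Stand-in for the platform's scene (domain) recognition.
    return "weather" if "weather" in text else "ticket"

def unified_dialogue(text):
    # Single interface: dispatch by domain, then delegate to the robot.
    return robots[recognize_domain(text)](text)

reply = unified_dialogue("book a ticket")
```

Adding a newly configured scene only requires registering its robot under a new domain key; callers keep using the same entry point.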
The embodiment of the invention also provides a multi-round dialogue configuration platform, as shown in fig. 8, which comprises: a conversation robot and a configuration module, where the conversation robot includes: a dialogue management module, a natural language understanding module, and a natural language generation module. The configuration module is used for configuring the dialogue management module, the natural language understanding module, and the natural language generation module, so as to realize the development and launch of the conversation robot.
When configuring the dialogue management module, the configuration module is used for receiving a configuration request input by a user; for displaying a configuration interface in response to the configuration request; for receiving a first configuration operation for a target task input by the user on the configuration interface, where the first configuration operation is used for configuring the functional attributes of the state points of a finite state machine and the functional attributes of the directed state edges between the state points, each state point characterizes a dialogue state point of the man-machine dialogue system for completing the target task, each directed state edge characterizes that the dialogue state of the man-machine dialogue system can be transferred from the dialogue state point at one end of the directed state edge to the dialogue state point at the other end, and the functional attribute of a directed state edge is a condition limiting when the state transition is satisfied; and for completing the management logic configuration of the dialogue management module of the conversation robot in response to the first configuration operation.
Optionally, the configuration module is further configured to: receiving a logic diagram editing instruction input by the user on the configuration interface, and displaying a logic diagram editing interface; receiving dialogue state point creation operation of the target task input by the user on the logic diagram editing interface; responding to the dialogue state point creation operation, generating dialogue state points and displaying the dialogue state points on the logic diagram editing interface; receiving a function attribute configuration operation of the user aiming at each dialogue state point; receiving a directed state edge creation operation input by the user on the logic diagram editing interface; generating and displaying directed state edges between state points corresponding to the directed state edge creation operations based on the directed state edge creation operations; and receiving configuration operation of the directed state edges input by the user, wherein the configuration operation of the directed state edges is used for configuring transition conditions of the directed state edges.
The configuration module is further configured to: and receiving the configuration operation of function call or API call, which is input by the user and is performed after each directed state edge meets the transfer condition.
Optionally, when the configuration operation on the directed state edge is used to configure the third party API call after the transfer condition is satisfied, the configuration module is further configured to: receiving the name, address, request mode and return parameter settings of the third party API input by the user on an API configuration interface; the third party API is configured based on the settings of the name, address, request mode, and return parameters.
Optionally, when the configuration operation on the directed state edge is used to configure the function call after the transition condition is met, the configuration module is further configured to: receiving the name, the function content and the return parameter settings of the function input by the user in a function configuration interface; the function is configured based on the name of the function, the content of the function, and the settings of the return parameters.
Optionally, the configuration module is further configured to: receiving a target task creation request input by the user on the configuration interface; and completing the creation of the target task according to the dialogue field, the dialogue intention and the dialogue slot carried in the target task creation request.
When the natural language understanding module is configured, the configuration module is further configured to: receiving a second configuration operation input by the user on the configuration interface, wherein the second configuration operation is used for configuring NLU parameters of a natural language understanding module of the conversation robot, and the NLU parameters comprise a conversation field, a conversation intention and a conversation slot position; and responding to the second configuration operation to complete NLU parameter configuration of the natural language understanding module of the conversation robot.
Optionally, the configuration module is further configured to: responding to a test instruction which is input by the user on the configuration interface and aims at the natural language understanding module, and generating a test address; and issuing the test address.
When the natural language generating module is configured, the configuration module is further configured to: receiving a third configuration operation input by the user on the configuration interface, wherein the third configuration operation is used for configuring NLG parameters of a natural language generation module of the conversation robot; and responding to the third configuration operation to complete NLG parameter configuration of the natural language generation module of the conversation robot.
Optionally, the configuration interface displays a component selection list, where the component selection list includes a plurality of selection components with different purposes, and the configuration module is further configured to: receiving a component determining instruction input by the user in the component selection list, and displaying a corresponding component configuration interface; and receiving the component configuration operation input by the user on the configuration interface.
After the dialogue management module, the natural language understanding module, and the natural language generation module of the conversation robots corresponding to at least two target tasks are configured, when the conversation robots are released, the configuration module is further configured to: issue a unified conversation interface, through which a user can converse with any one of the conversation robots respectively corresponding to the at least two target tasks.
The implementation principle and technical effects of the multi-round dialogue configuration platform provided by the embodiment of the application are the same as those of the foregoing method embodiment; for brevity, reference may be made to the corresponding content in the foregoing method embodiment.
Fig. 9 is a schematic diagram of exemplary hardware and software components of an electronic device according to an embodiment of the present application.
The electronic device 100 may be a general-purpose computer or a special-purpose computer; both may be used to implement the configuration method of the man-machine conversation system of the present application. Although only one computer is shown, for convenience, the functionality described herein may be implemented in a distributed fashion across multiple similar platforms to balance the processing load.
For example, the electronic device 100 may include a network port 110 connected to a network, one or more processors 120 for executing program instructions, a communication bus 130, and various forms of storage media 140, such as magnetic disk, ROM, or RAM, or any combination thereof. By way of example, the computer platform may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The method of the present application may be implemented in accordance with these program instructions. The electronic device 100 also includes an Input/Output (I/O) interface 150 between the computer and other input/output devices (e.g., a keyboard and a display screen).
For ease of illustration, only one processor is depicted in the electronic device 100. It should be noted, however, that the electronic device 100 of the present application may also include a plurality of processors; steps described herein as performed by one processor may therefore also be performed jointly or separately by a plurality of processors. For example, if the processor of the electronic device 100 performs step A and step B, step A and step B may instead be performed by two different processors, either jointly or separately (e.g., a first processor performs step A and a second processor performs step B, or the first and second processors perform steps A and B together).
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts among the embodiments, reference may be made to one another.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when run by a processor, performs the steps of the configuration method of the man-machine interaction system in the foregoing method embodiments. For the specific implementation, reference may be made to the method embodiments, which are not repeated here.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and when the program code on the storage medium is executed, the configuration method of the human-machine interaction system shown in the above embodiment can be executed.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (24)

1. A method for configuring a human-machine interactive system, comprising:
receiving a configuration request input by a user;
responding to the configuration request by displaying a configuration interface, wherein the configuration interface is an interface on which various functional components are provided in advance, different functional components being selectable to display corresponding interfaces, and a task list component, an interface list component, a function list component, an action configuration component and an NLU configuration component are displayed on the configuration interface;
receiving a first configuration operation, input by the user on the configuration interface, for a target task, wherein the first configuration operation is used for configuring functional attributes of state points of a finite state machine and functional attributes of directed state edges between the state points, the state points represent dialogue state points of the man-machine dialogue system for completing the target task, a directed state edge represents that the dialogue state of the man-machine dialogue system can be transferred from the dialogue state point at one end of the directed state edge to the dialogue state point at its other end, and the functional attributes of the directed state edges are used for defining the conditions under which the state transfer occurs;
and completing the dialogue management logic configuration of the man-machine dialogue system based on the finite state machine in response to the first configuration operation.
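The finite-state-machine dialogue management of claim 1 can be sketched as dialogue state points joined by directed edges whose functional attribute is a transition condition. Everything below (state names, predicate conditions, first-match transfer) is an illustrative assumption rather than the disclosed implementation:

```python
class DialogFSM:
    """Dialogue manager as a finite state machine: nodes are dialogue
    state points, directed edges carry the condition for state transfer."""

    def __init__(self, start):
        self.state = start
        # Maps (source state, target state) to a condition predicate
        # evaluated against the latest user input.
        self.edges = {}

    def add_edge(self, src, dst, condition):
        self.edges[(src, dst)] = condition

    def step(self, user_input):
        # Transfer along the first outgoing edge whose condition is met;
        # otherwise remain in the current dialogue state.
        for (src, dst), cond in self.edges.items():
            if src == self.state and cond(user_input):
                self.state = dst
                return dst
        return self.state


fsm = DialogFSM("ask_destination")
fsm.add_edge("ask_destination", "confirm_order",
             lambda text: "airport" in text)
```

A dialogue turn whose input satisfies the edge condition moves the machine to the next dialogue state point; otherwise it stays where it is, which matches the claim's notion of a transfer occurring only when the edge's condition is met.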
2. The method of claim 1, wherein receiving a first configuration operation for a target task entered by the user on the configuration interface comprises:
receiving dialogue state point creation operation of the target task input by the user on the configuration interface;
responding to the dialogue state point creation operation, generating dialogue state points and displaying the dialogue state points;
receiving a function attribute configuration operation of the user aiming at each dialogue state point;
receiving the directed state edge creation operation input by the user;
generating and displaying directed state edges between state points corresponding to the directed state edge creation operations based on the directed state edge creation operations;
and receiving configuration operation of the directed state edges input by the user, wherein the configuration operation of the directed state edges is used for configuring transition conditions of the directed state edges.
3. The method of claim 1, wherein receiving a first configuration operation for a target task input by the user on the configuration interface further comprises:
receiving a configuration operation, input by the user, of a function call or API call to be performed after each directed state edge satisfies its transfer condition.
4. The method of claim 3, wherein, when the configuration operation on the directed state edge is used to configure a third-party API call after a transition condition is satisfied, the method further comprises:
displaying an API configuration interface;
receiving settings of the name, address, request mode and return parameters of the third-party API input by the user on the API configuration interface; and
configuring the third-party API based on the settings of the name, address, request mode and return parameters.
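The third-party API settings of claim 4 (name, address, request mode, and return parameters) can be kept in a plain configuration record. The field names and the `describe` helper are hypothetical, chosen only to mirror the claim language; no actual network call is made:

```python
from dataclasses import dataclass, field

@dataclass
class ApiConfig:
    """Configuration record for a third-party API call bound to a state edge."""
    name: str
    address: str                     # endpoint URL
    request_mode: str                # e.g. "GET" or "POST"
    return_params: list = field(default_factory=list)

    def describe(self):
        # Human-readable summary of the configured call.
        return f"{self.request_mode} {self.address} -> {self.return_params}"


cfg = ApiConfig("order_lookup", "https://example.com/api/order",
                "GET", ["order_id", "status"])
```

A dialogue engine could look up such a record when an edge's transition fires and issue the configured request, but that execution step is outside this sketch.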
5. The method of claim 3, wherein, when the configuration operation on the directed state edge is used to configure a function call after a transition condition is satisfied, the method further comprises:
displaying a function configuration interface;
receiving settings of the name, function content and return parameters of the function input by the user on the function configuration interface; and
configuring the function based on the settings of the name, function content and return parameters.
6. The method of claim 1, wherein prior to receiving the first configuration operation for the target task entered by the user on the configuration interface, the method further comprises:
receiving a target task creation request input by the user on the configuration interface; and
completing the creation of the target task according to the dialogue field, dialogue intention and dialogue slot carried in the target task creation request.
7. The method of claim 1, wherein after displaying the configuration interface in response to the configuration request, the method further comprises:
receiving a second configuration operation input by the user on the configuration interface, wherein the second configuration operation is used for configuring NLU parameters of a natural language understanding module of the man-machine conversation system, and the NLU parameters comprise a conversation field, a conversation intention and a conversation slot;
and responding to the second configuration operation to complete NLU parameter configuration of the natural language understanding module of the man-machine conversation system.
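Claim 7's NLU parameters (dialogue field, dialogue intention, dialogue slot) can be illustrated with a toy understanding function. The keyword-matching rule and the returned dictionary shape are assumptions for illustration only, not the configured NLU module:

```python
def simple_nlu(utterance, slot_keywords):
    """Toy NLU: maps an utterance to a dialogue field (domain),
    a dialogue intention (intent) and filled dialogue slots."""
    # A slot is "filled" when its keyword appears in the utterance;
    # real NLU would use trained models rather than substring matching.
    slots = {name: word
             for name, word in slot_keywords.items()
             if word in utterance}
    intent = "book_taxi" if "taxi" in utterance else "unknown"
    return {"domain": "travel", "intent": intent, "slots": slots}


result = simple_nlu("book a taxi to the airport",
                    {"destination": "airport"})
```

The returned domain/intent/slot triple is exactly the kind of output the dialogue management FSM would consume when deciding whether an edge's transition condition is met.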
8. The method of claim 7, wherein after completing NLU parameter configuration of a natural language understanding module of the human-machine dialog system in response to the second configuration operation, the method further comprises:
generating a test address in response to a test instruction, input by the user on the configuration interface, for the natural language understanding module; and
issuing the test address.
9. The method of claim 1, wherein after completing the finite state machine based dialog management logic configuration of the human-machine dialog system in response to the first configuration operation, the method further comprises:
receiving a third configuration operation input by the user on the configuration interface, wherein the third configuration operation is used for configuring natural language generation parameters of the man-machine conversation system;
and responding to the third configuration operation to complete the natural language generation parameter configuration of the man-machine conversation system.
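The natural language generation parameters of claim 9 can be pictured as reply templates filled with slot values. The `str.format` placeholder syntax is an assumed stand-in for whatever template form the platform actually configures:

```python
def generate_reply(template, slots):
    """Fill a configured reply template with dialogue slot values."""
    return template.format(**slots)


reply = generate_reply("Your taxi to {destination} arrives at {time}.",
                       {"destination": "the airport", "time": "10:00"})
```

Configuring NLG then amounts to choosing, per dialogue state, which template to emit and which slots feed it.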
10. The method of claim 9, wherein a component selection list is displayed on the configuration interface, the component selection list comprising a plurality of different purpose selection components; receiving a third configuration operation input by the user on the configuration interface, including:
receiving a component determining instruction input by the user in the component selection list, and displaying a corresponding component configuration interface;
and receiving the component configuration operation input by the user on the configuration interface.
11. The method according to claim 1, wherein after configuring the human-machine conversation systems respectively corresponding to at least two of the target tasks, the method further comprises:
and issuing a unified dialogue interface through which a user can converse with any one of the man-machine dialogue systems respectively corresponding to the at least two target tasks.
12. A multi-round dialog configuration platform, comprising: the configuration module is configured to be configured to,
the configuration module is used for receiving a configuration request input by a user;
the configuration module is further used for displaying a configuration interface in response to the configuration request, wherein the configuration interface is an interface on which various functional components are provided in advance, different functional components being selectable to display corresponding interfaces, and the configuration interface displays a task list component, an interface list component, a function list component, an action configuration component and an NLU configuration component;
for receiving a first configuration operation, input by the user on the configuration interface, for a target task, wherein the first configuration operation is used for configuring functional attributes of state points of a finite state machine and functional attributes of directed state edges between the state points, the state points represent dialogue state points of a man-machine dialogue system for completing the target task, a directed state edge represents that the dialogue state of the man-machine dialogue system can be transferred from the dialogue state point at one end of the directed state edge to the dialogue state point at its other end, and the functional attributes of the directed state edges are used for defining the conditions under which the state transfer occurs;
and for completing the finite state machine-based dialogue management logic configuration of the man-machine dialogue system in response to the first configuration operation.
13. The platform of claim 12, wherein the configuration module is further configured to:
receiving dialogue state point creation operation of the target task input by the user on the configuration interface;
responding to the dialogue state point creation operation, generating dialogue state points and displaying the dialogue state points;
receiving a function attribute configuration operation of the user aiming at each dialogue state point;
receiving the directed state edge creation operation input by the user;
generating and displaying directed state edges between state points corresponding to the directed state edge creation operations based on the directed state edge creation operations;
and receiving configuration operation of the directed state edges input by the user, wherein the configuration operation of the directed state edges is used for configuring transition conditions of the directed state edges.
14. The platform of claim 12, wherein the configuration module is further configured to: receive a configuration operation, input by the user, of a function call or API call to be performed after each directed state edge satisfies its transfer condition.
15. The platform of claim 14, wherein when the configuration operation on the directed state edge is used to configure a third party API call after a transfer condition is satisfied, the configuration module is further configured to:
receiving the name, address, request mode and return parameter settings of the third party API input by the user on an API configuration interface;
the third party API is configured based on the settings of the name, address, request mode, and return parameters.
16. The platform of claim 14, wherein when the configuration operation on the directed state edge is used to configure a function call after a transition condition is satisfied, the configuration module is further configured to:
receiving the name, the function content and the return parameter settings of the function input by the user in a function configuration interface;
the function is configured based on the name of the function, the content of the function, and the settings of the return parameters.
17. The platform of claim 12, wherein the configuration module is further configured to:
receiving a target task creation request input by the user on the configuration interface;
and completing the creation of the target task according to the dialogue field, the dialogue intention and the dialogue slot carried in the target task creation request.
18. The platform of claim 12, wherein the configuration module is further configured to:
receiving a second configuration operation input by the user on the configuration interface, wherein the second configuration operation is used for configuring NLU parameters of a natural language understanding module of the man-machine conversation system, and the NLU parameters comprise a conversation field, a conversation intention and a conversation slot;
and responding to the second configuration operation to complete NLU parameter configuration of the natural language understanding module of the man-machine conversation system.
19. The platform of claim 18, wherein the configuration module is further configured to:
generating a test address in response to a test instruction, input by the user on the configuration interface, for the natural language understanding module; and
issuing the test address.
20. The platform of claim 18, wherein the configuration module is further configured to:
receiving a third configuration operation input by the user on the configuration interface, wherein the third configuration operation is used for configuring natural language generation parameters of the man-machine conversation system;
and responding to the third configuration operation to complete the natural language generation parameter configuration of the man-machine conversation system.
21. The platform of claim 20, wherein the configuration interface has a component selection list displayed thereon, the component selection list including a plurality of different purpose selection components, the configuration module further configured to:
receiving a component determining instruction input by the user in the component selection list, and displaying a corresponding component configuration interface;
and receiving the component configuration operation input by the user on the configuration interface.
22. The platform of claim 12, wherein after configuring the man-machine conversation systems respectively corresponding to the at least two target tasks, the configuration module is further configured to:
and issuing a unified dialogue interface through which a user can perform dialogue with any one of the man-machine dialogue systems respectively corresponding to at least two target tasks.
23. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the method of configuring a human-machine interaction system of any one of claims 1-11.
24. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the method of configuring a human-machine interaction system according to any one of claims 1-11.
CN201910141210.5A 2019-02-25 2019-02-25 Configuration method of man-machine conversation system, multi-round conversation configuration platform and electronic equipment Active CN111611357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910141210.5A CN111611357B (en) 2019-02-25 2019-02-25 Configuration method of man-machine conversation system, multi-round conversation configuration platform and electronic equipment

Publications (2)

Publication Number Publication Date
CN111611357A CN111611357A (en) 2020-09-01
CN111611357B true CN111611357B (en) 2023-08-15

Family

ID=72197579



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246687A (en) * 2008-03-20 2008-08-20 北京航空航天大学 An intelligent voice interaction system and interaction method
CN108415710A (en) * 2018-03-14 2018-08-17 苏州思必驰信息科技有限公司 Method and system for publishing and calling API on intelligent dialogue development platform
CN108804536A (en) * 2018-05-04 2018-11-13 科沃斯商用机器人有限公司 Human-computer dialogue and strategy-generating method, equipment, system and storage medium
CN108984157A (en) * 2018-07-27 2018-12-11 苏州思必驰信息科技有限公司 Technical ability configuration and call method and system for voice dialogue platform
CN109002510A (en) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 A kind of dialog process method, apparatus, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9214156B2 (en) * 2013-08-06 2015-12-15 Nuance Communications, Inc. Method and apparatus for a multi I/O modality language independent user-interaction platform




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant