CN107016046A - Vision-based intelligent robot dialogue method and system - Google Patents
Vision-based intelligent robot dialogue method and system
- Publication number
- CN107016046A CN201710089553.2A CN201710089553A
- Authority
- CN
- China
- Prior art keywords
- dialogue
- keyword
- intelligent robot
- data
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a vision-based intelligent robot dialogue method and system. The intelligent robot is provided with a robot operating system, and the method includes: acquiring image data of the surrounding environment, and parsing the image data to identify the objects contained in the environment; extracting the keyword corresponding to each identified object; and inputting the keywords into a dialogue generation model, which generates and outputs dialogue data. According to the invention, the robot can expand the conversation content according to the current scene, improving the robot's intelligence and its ability to interact with the user.
Description
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a vision-based intelligent robot dialogue method and system.
Background technology
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial-intelligence technology, robotics research has gradually moved beyond the industrial sector and extended into fields such as medical care, health care, the home, entertainment and the service industry. Accordingly, what people require of a robot has been raised from simple, repetitive mechanical actions to an intelligent robot capable of human-like question answering, autonomy and interaction with other robots, and human-robot interaction has therefore become the key factor determining the development of intelligent robots.
A robot deployed in a home scenario usually replies to the user according to the voice information the user inputs during a spoken exchange. Although this can bring the user some enjoyment, existing robots can only passively output dialogue sentences from their built-in question-and-answer database, and the content of the output sentences is rather poor. This reduces the user's interest in using the robot, discourages prolonged dialogue between robot and user, and leaves the robot's intelligence and human-likeness in need of improvement.
Therefore, a robot dialogue solution that can improve the intelligence and human-likeness of intelligent robots is urgently needed.
Summary of the invention
One of the technical problems to be solved by the invention is the need to provide a robot dialogue solution that can improve the intelligence and human-likeness of intelligent robots.
To solve the above technical problem, an embodiment of the application first provides a vision-based intelligent robot dialogue method. The intelligent robot is provided with a robot operating system, and the method includes: acquiring image data of the surrounding environment, and parsing the image data to identify the objects contained in the environment; extracting the keyword corresponding to each identified object; and inputting the keywords into a dialogue generation model, which generates and outputs dialogue data.
Preferably, the method further includes: inputting historical dialogue-interaction information into the dialogue generation model to filter the multiple keywords, and selecting the keywords that appear in the historical dialogue-interaction information to generate the dialogue output data.
Preferably, in passive dialogue mode, the dialogue-interaction data currently input by the user is also fed into the dialogue generation model.
Preferably, in active dialogue mode, the method further includes: parsing the user's current behavioral gesture, obtaining the keyword corresponding to the gesture, and inputting that keyword into the dialogue generation model.
Preferably, the dialogue generation model is an encoder-decoder.
An embodiment of the application also provides a vision-based intelligent robot dialogue system. The intelligent robot is provided with a robot operating system, and the dialogue system includes: an image-data parsing module, which acquires image data of the surrounding environment and parses it to identify the objects contained in the environment; a keyword extraction module, which extracts the keyword corresponding to each identified object; and a dialogue-data output module, which inputs the keywords into a dialogue generation model that generates and outputs dialogue data.
Preferably, the dialogue-data output module further inputs historical dialogue-interaction information into the dialogue generation model to filter the multiple keywords, and selects the keywords that appear in the historical dialogue-interaction information to generate the dialogue output data.
Preferably, in passive dialogue mode, the dialogue-data output module also feeds the dialogue-interaction data currently input by the user into the dialogue generation model.
Preferably, in active dialogue mode, the image-data parsing module also parses the user's current behavioral gesture; the keyword extraction module further obtains the keyword corresponding to the gesture; and the dialogue-data output module further inputs that keyword into the dialogue generation model.
Preferably, the dialogue generation model is an encoder-decoder.
Compared with the prior art, one or more of the above embodiments can have the following advantages or beneficial effects:
In the embodiments of the invention, the image data of the robot's current surrounding environment is first acquired and parsed to identify the objects it contains; the keyword corresponding to each object is then input into a dialogue generation model, which generates and outputs dialogue data. With this method, the robot can expand the conversation content according to the current scene, improving its intelligence and its ability to interact with the user.
Other features and advantages of the invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the technical solutions of the invention. The objectives and other advantages of the invention can be realized and obtained through the structures and/or flows particularly pointed out in the specification, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings provide a further understanding of the technical solutions of the application or of the prior art, and constitute a part of the specification. Together with the embodiments of the application they serve to explain the technical solutions of the application, but they do not limit those solutions.
Fig. 1 is a flow diagram of the vision-based intelligent robot dialogue method according to the first embodiment of the invention.
Fig. 2 is a flow diagram of the vision-based intelligent robot dialogue method according to the second embodiment of the invention.
Fig. 3 is a flow diagram of the vision-based intelligent robot dialogue method according to the third embodiment of the invention.
Fig. 4 is a structural diagram of the vision-based intelligent robot dialogue system according to the fourth embodiment of the invention.
Detailed description of the embodiments
The embodiments of the invention are described in detail below with reference to the drawings and examples, so that how the invention applies technical means to solve technical problems and achieve the corresponding technical effects can be fully understood and put into practice. Provided they do not conflict with one another, the features of the embodiments of the application can be combined, and the technical solutions so formed all fall within the protection scope of the invention.
In addition, the steps shown in the flowcharts of the drawings can be executed in a computer system as, for example, a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described can be performed in an order different from the one given here.
In the existing field of intelligent robotics, most robots can carry out simple voice interaction with a user, completing a task the user assigns or conducting simple question-and-answer exchanges. However, such interaction is generally initiated by the user, who asks a question to wake the robot up, and the content of the interaction is mostly drawn from an existing question-and-answer database. This mode of interaction is rather monotonous, the robot's intelligence is poor, and the user's interest in using an intelligent robot is greatly reduced. The embodiment of the invention proposes a new robot dialogue method. To enrich the language the robot uses when communicating with the user, the method lets the robot detect, in real time or intermittently, the objects in a scene such as a home (a cup or a desk, for example) and then, actively or passively, output to the user sentences carrying more information based on the keywords corresponding to the objects in the current scene, improving the user's interest in using the robot.
In addition, when the robot extracts multiple scene keywords, the historical dialogue-interaction information can be used to filter them, and the dialogue data is produced from the keywords that appear in that historical information. Since the historical dialogue-interaction information is the record of previous interactions between users (including the current user and other users) and the robot, it indicates the users' degree of interest in certain things, so dialogue output data determined from this historical information can raise the user's interest in the interaction.
Furthermore, when the robot engages in a dialogue driven by the user's input, the dialogue-interaction data currently input by the user can also be fed into the dialogue generation model. The model then outputs the corresponding dialogue data based on the input scene keywords and the user's current dialogue-interaction data, or based on the scene keywords, the historical dialogue record and the user's dialogue-interaction data. In this way the robot can respond to the user more accurately, improving the user experience.
When it is the robot that actively initiates the dialogue, the robot can also parse the user's behavioral gesture and obtain the corresponding keyword. This keyword, together with the scene keywords and the historical dialogue record, can serve as input to the dialogue generation model, producing more accurate dialogue output data, so that the dialogue between robot and user becomes more intelligent and the user experience improves.
It should be noted that the keywords involved in the embodiments of the invention are mainly descriptive text terms. A "keyword" corresponding to a scene is mainly an object name or another term related to names in the scene; a "keyword" corresponding to a gesture can express the further meaning conveyed by the gesture. For example, if the user's gesture is resting a cheek on a hand, its keyword can be "thinking".
Existing dialogue models rely on a predefined "answer set" containing many answers, together with heuristic rules for picking a suitable answer based on the input question and its context. Such a model produces no new text; it can only select a reasonably suitable answer from the predefined set. The "dialogue generation model" of this embodiment does not depend on a predefined answer set: it can produce new answers from the input content. The "encoder-decoder" in this example generates answers to questions by encoding and decoding. Accordingly, the "dialogue generation model" referred to in this example is an encoder-decoder whose basic framework is the sequence-to-sequence model, i.e. a "sentence-to-sentence" model.
First embodiment
Fig. 1 is a flow diagram of example one of the vision-based intelligent robot dialogue method of the invention. The intelligent robot is preferably one provided with a robot operating system; however, other intelligent robots (or devices) with speech, expression and motion capabilities that do not use such an operating system can also implement this embodiment. The steps of the method are described below with reference to Fig. 1.
In step S110, image data of the surrounding environment is acquired and parsed to identify the objects contained in the environment.
When the user needs dialogue interaction, the robot's dialogue function is started. The robot calls its own image-capture device, such as a camera, to collect image data of the current environment, or video data made up of multiple image frames. The acquired image data is then parsed. Specifically, the image or video data is first preprocessed; preprocessing mainly includes denoising operations such as filtering and the correction of geometric distortion. Image preprocessing reduces the complexity of subsequent image processing and improves processing efficiency. Next, an existing object-recognition algorithm identifies all the objects involved in the image data, for example by scanning windows over the picture and repeating the recognition at several preset scaling levels. In the recognition process, feature vectors are first extracted from the image, and the objects are then classified and recognized by applying an SVM algorithm to the feature vectors.
It will be appreciated that the robot's environment may contain no objects at all, or an object in the collected image may be occluded and impossible to identify accurately. In that case the robot changes the camera angle or moves, collects the image again, and then performs object recognition.
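The multi-scale window scan described for step S110 can be sketched as follows. This is an illustrative outline only: the `classify` callback standing in for feature extraction plus SVM classification is a hypothetical stub, not the patent's implementation.

```python
def sliding_windows(width, height, win=64, step=32):
    """Yield (x, y, win) window positions across an image plane."""
    for y in range(0, height - win + 1, step):
        for x in range(0, width - win + 1, step):
            yield (x, y, win)

def detect_objects(image_size, classify, scales=(1.0, 0.5)):
    """Repeat the window scan at several preset scaling levels and
    collect every object label the classifier reports."""
    width, height = image_size
    found = set()
    for scale in scales:
        scaled_w, scaled_h = int(width * scale), int(height * scale)
        for window in sliding_windows(scaled_w, scaled_h):
            # Stand-in for feature-vector extraction + SVM classification.
            label = classify(window, scale)
            if label is not None:
                found.add(label)
    return found
```

In a real system `classify` would crop the window from the scaled image, extract a feature vector and run it through a trained SVM; the scan structure, however, is the same.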
In step S120, the keyword corresponding to each identified object is extracted.
In brief, after the objects are identified, a descriptive text term is generated for each of them. For example, if the identified object is a cup, the extracted keyword is "cup".
In step S130, the keywords are input into the dialogue generation model, which generates and outputs dialogue data. The dialogue generation model is an encoder-decoder.
In this example, the input of the dialogue generation model is the historical dialogue record together with the object keywords of the current dialogue environment. After an object keyword is input, the model can generate dialogue output data related to that keyword, for example at least one of: knowledge data related to the identified object (such as an introduction to a painting or a short biography of the artist), entertainment data (such as an English song related to "flower") and historical-record data (such as which people used the desk before). The robot generates the corresponding voice information from this dialogue output data and sends it to the user, thereby actively interacting with the user.
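As a rough sketch of how the model's inputs and outputs fit together, the dictionary-backed stand-in below mirrors only the interface: the actual model in this example is a trained sequence-to-sequence encoder-decoder, and the response texts here are invented placeholders.

```python
# Hypothetical response material, standing in for a trained model's output.
RESPONSES = {
    "painting": "This painting is in the impressionist style.",
    "flower": "Shall I play an English song about flowers?",
    "desk": "Your grandfather used this desk before you.",
}

def generate_dialogue(keywords, history):
    """Given scene keywords and the dialogue history, produce one
    dialogue output sentence (toy stand-in for the encoder-decoder;
    `history` is unused here but the real model conditions on it)."""
    for keyword in keywords:
        if keyword in RESPONSES:
            return RESPONSES[keyword]
    return "Tell me more about what you see."
```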
Second embodiment
Fig. 2 is a flow diagram of example two of the vision-based intelligent robot dialogue method of the invention. The method of this embodiment mainly includes the following steps; steps similar to those of the first embodiment are marked with the same reference numbers and are not described again, and only the differing steps are described in detail.
In step S110, image data of the surrounding environment is acquired and parsed to identify the objects contained in the environment.
In step S120, the keyword corresponding to each identified object is extracted.
In step S140, it is judged whether there are multiple keywords; if so, step S150 is performed, otherwise step S130 is performed.
In step S150, the historical dialogue-interaction information is input into the dialogue generation model to filter the multiple keywords, and the keywords that appear in the historical dialogue-interaction information are selected to generate the dialogue output data.
Specifically, a word-similarity algorithm can be used to compute a similarity value between the terms in the historical dialogue-interaction information and each keyword; the keywords corresponding to the multiple objects are then filtered according to these similarity values to determine which keywords appear in the historical dialogue-interaction information, and the dialogue output data is generated from those keywords. Besides the word-similarity algorithm, a simpler term-matching method can also be used for filtering: the consistency of the terms is checked, and the keywords appearing in the historical dialogue-interaction information are determined from that consistency.
In addition, if several keywords appear in the historical dialogue-interaction information, the occurrences of each keyword are counted, and the keyword that appears most often is used as the input of the dialogue generation model.
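A minimal version of the filtering in step S150, using plain substring matching as the simpler term-matching variant and the occurrence count as the tie-break (the word-similarity variant would swap a similarity function in here). The fallback when no keyword appears in the history is an assumption; the patent does not state that behavior.

```python
from collections import Counter

def filter_keywords(keywords, history_utterances):
    """Keep the keyword that occurs most often in the dialogue history;
    if no keyword appears there at all, fall back to the full list
    (an assumption, not specified by the patent)."""
    counts = Counter()
    for utterance in history_utterances:
        for keyword in keywords:
            counts[keyword] += utterance.count(keyword)
    hits = [kw for kw in keywords if counts[kw] > 0]
    if not hits:
        return list(keywords)
    # Tie-break of the second embodiment: most frequent keyword wins.
    return [max(hits, key=lambda kw: counts[kw])]
```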
In step S130, the keywords are input into the dialogue generation model, which generates and outputs dialogue data.
3rd embodiment
Fig. 3 is a flow diagram of example three of the vision-based intelligent robot dialogue method of the invention. The method of this embodiment mainly includes the following steps; steps similar to those of the first embodiment are marked with the same reference numbers and are not described again, and only the differing steps are described in detail.
In step S110, image data of the surrounding environment is acquired and parsed to identify the objects contained in the environment.
In step S120, the keyword corresponding to each identified object is extracted.
In step S140, it is judged whether there are multiple keywords; if so, step S220 is performed, otherwise step S210 is performed.
In step S210, the keywords are input into the dialogue generation model.
In step S220, the historical dialogue-interaction information is input into the dialogue generation model to filter the multiple keywords.
In step S230, it is judged whether the dialogue is in passive dialogue mode; if so, step S240 is performed, otherwise step S250 is performed.
"Passive dialogue mode" refers to a dialogue mode in which the user initiates the dialogue and the robot responds; "active dialogue mode" is the opposite: the robot initiates the dialogue and the user responds. In one example, the robot can determine whether the current dialogue is in passive mode by judging whether the user was the first to output voice information in the opening round of the dialogue.
In step S240, in passive dialogue mode, the dialogue-interaction data currently input by the user is also fed into the dialogue generation model.
The user's voice information is recognized to obtain the dialogue-interaction data, which is input into the dialogue generation model. Based on the input keywords and dialogue-interaction data, the model generates dialogue output data, which is converted into voice information and sent to the user.
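The branch between steps S240 and S250 amounts to assembling different inputs for the generation model. A small sketch follows; the field names are assumptions for illustration, not terms from the patent.

```python
def build_model_input(keywords, history, mode, user_utterance=None,
                      gesture_keyword=None):
    """Collect the inputs for the dialogue generation model: scene
    keywords and history always; the user's current utterance in
    passive mode (S240); the gesture keyword in active mode (S250)."""
    inputs = {"keywords": list(keywords), "history": list(history)}
    if mode == "passive" and user_utterance is not None:
        inputs["utterance"] = user_utterance
    elif mode == "active" and gesture_keyword is not None:
        inputs["keywords"].append(gesture_keyword)
    return inputs
```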
In step S250, in active dialogue mode, the user's current behavioral gesture is parsed, the keyword corresponding to the gesture is obtained, and that keyword is input into the dialogue generation model.
In this example, behavioral gestures include the user's facial expressions and physical actions. Taking expressions as an example, the robot's camera collects images of the human face, converts them into analyzable data, and then applies image processing, artificial intelligence and other techniques to analyze the expressed mood. Understanding a facial expression usually requires detecting its subtle changes, such as movements of the cheek muscles or the mouth, or the raising of an eyebrow.
Machine-learning methods can be applied in advance to teach the robot how to recognize and track facial expressions in the way most faces display them, and a facial-mood database is built from the learning results. For example, the robot can first learn the facial expressions shown by the three negative emotions of anger, fear and sadness, and then store the data corresponding to these expressions. When recognizing negative emotions, the acquired facial-expression image is compared with the information in the database to determine whether the user is in a negative mood: if a facial expression matching a stored negative-emotion expression is detected, the expressed mood is determined to be negative. The keyword related to the facial expression is then extracted; for example, if a facial expression matching the stored image of a sad expression is detected, the user is determined to be in a sad state and the extracted keyword is "sadness".
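The template comparison described above can be sketched as a nearest-match search over stored emotion templates. The two-element feature vectors below are purely illustrative; real expression features would be far higher-dimensional and learned from data.

```python
def match_expression(feature, templates, max_distance=1.0):
    """Return the emotion keyword of the stored template closest to the
    extracted face feature, or None when nothing is close enough."""
    best_label, best_distance = None, max_distance
    for label, template in templates.items():
        # Euclidean distance between the extracted feature and the template.
        distance = sum((a - b) ** 2 for a, b in zip(feature, template)) ** 0.5
        if distance < best_distance:
            best_label, best_distance = label, distance
    return best_label
```

When a match is found, the returned label (e.g. "sadness") becomes the gesture keyword fed into the dialogue generation model.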
Similarly, when the user's action is detected, a procedure similar to facial-expression recognition is used to determine the user's current action, and the keyword related to the action is then extracted. It should be noted that the robot can parse the user's facial expression and action at the same time, or parse only one of them, as required.
After the keyword corresponding to the user's behavioral gesture is obtained, it is input into the dialogue generation model. Based on the keyword set (the filtered keywords together with the keyword corresponding to the gesture), the model generates dialogue output data, which is converted into voice information and sent to the user.
Fourth embodiment
Fig. 4 is a structural diagram of the vision-based intelligent robot dialogue system according to the fourth embodiment of the invention; the intelligent robot is preferably one provided with a robot operating system. As shown in Fig. 4, the intelligent robot dialogue system 500 of the embodiment of the application mainly includes: an image-data parsing module 510, a keyword extraction module 520 and a dialogue-data output module 530.
The image-data parsing module 510 acquires image data of the surrounding environment and parses it to identify the objects contained in the environment. In active dialogue mode, the module 510 also parses the user's current behavioral gesture.
The keyword extraction module 520 extracts the keyword corresponding to each identified object. The module 520 further obtains the keyword corresponding to the user's behavioral gesture.
The dialogue-data output module 530 inputs the keywords into a dialogue generation model, which generates and outputs dialogue data. The module 530 further inputs the historical dialogue-interaction information into the dialogue generation model to filter the multiple keywords, and selects the keywords that appear in the historical dialogue-interaction information to generate the dialogue output data. In passive dialogue mode, the module 530 also feeds the dialogue-interaction data currently input by the user into the dialogue generation model; it further inputs the keyword corresponding to the user's behavioral gesture into the dialogue generation model. The dialogue generation model is an encoder-decoder.
Through appropriate configuration, the intelligent robot dialogue system 500 of this embodiment can perform each step of the first, second and third embodiments, which are not repeated here.
The method of the invention is described as being implemented in a computer system, which can, for example, be provided in the robot's central control processor. For example, the methods described here can be implemented as software executable with control logic, executed by the CPU in the robot operating system. The functions described here can be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium. When implemented in this way, the computer program includes a set of instructions which, when run by a computer, cause the computer to perform a method that can carry out the above functions. The program instructions can be temporarily or permanently installed in a non-transitory tangible computer-readable medium, for example a ROM chip, computer memory, a disk or another storage medium. In addition to being realized in software, the logic described here can be embodied using discrete components, an integrated circuit, programmable logic used in combination with a programmable logic device (such as a field-programmable gate array (FPGA) or a microprocessor), or any other device including any combination of them. All such embodiments are intended to fall within the scope of the invention.
It should be understood that the disclosed embodiments of the invention are not limited to the specific structures, process steps or materials disclosed here, but extend to their equivalents as understood by those of ordinary skill in the relevant art. It should also be understood that the terms used here serve only to describe specific embodiments and are not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Therefore, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the embodiments are disclosed above, the content described is only an implementation adopted to facilitate understanding of the invention and does not limit it. Any person skilled in the art to which the invention pertains can make modifications and changes in form and detail without departing from the spirit and scope disclosed by the invention, but the scope of patent protection of the invention is still subject to the scope defined by the appended claims.
Claims (10)
1. A vision-based intelligent robot dialogue method, the intelligent robot being provided with a robot operating system, the method comprising:
acquiring image data of the surrounding environment, and parsing the image data to identify the objects contained in the surrounding environment;
extracting the keyword corresponding to each identified object;
inputting the keywords into a dialogue generation model, and generating and outputting dialogue data.
2. The intelligent robot dialogue method according to claim 1, characterised in that the method further comprises the following step:
inputting historical dialogue-interaction information into the dialogue generation model to filter the multiple keywords, and selecting the keywords that appear in the historical dialogue-interaction information to generate the dialogue output data.
3. The intelligent robot dialogue method according to claim 1 or 2, characterised in that,
in passive dialogue mode, the dialogue-interaction data currently input by the user is also fed into the dialogue generation model.
4. The intelligent robot dialogue method according to claim 1 or 2, characterised in that, in active dialogue mode, the method further comprises: parsing the user's current behavioral gesture, obtaining the keyword corresponding to the gesture, and inputting the keyword corresponding to the gesture into the dialogue generation model.
5. The intelligent robot dialogue method according to claim 1, characterised in that the dialogue generation model is an encoder-decoder.
6. A vision-based intelligent robot dialogue system, the intelligent robot being provided with a robot operating system, the dialogue system comprising:
an image-data parsing module, which acquires image data of the surrounding environment and parses the image data to identify the objects contained in the surrounding environment;
a keyword extraction module, which extracts the keyword corresponding to each identified object;
a dialogue-data output module, which inputs the keywords into a dialogue generation model, and generates and outputs dialogue data.
7. The intelligent robot dialogue system according to claim 6, wherein
the dialogue data output module further inputs historical dialogue interaction information into the dialogue generation model to filter the
plurality of keywords, and selects keywords that have appeared in the historical dialogue interaction information to generate the dialogue output data.
8. The intelligent robot dialogue system according to claim 6 or 7, wherein,
in a passive dialogue mode, the dialogue data output module further inputs dialogue interaction data currently input by the
user into the dialogue generation model.
9. The intelligent robot dialogue system according to claim 6 or 7, wherein,
in an active dialogue mode, the image data parsing module further parses the user's current behavioral posture;
the keyword extraction module further obtains a keyword corresponding to the user's behavioral posture; and
the dialogue data output module further inputs the keyword corresponding to the user's behavioral posture into the dialogue
generation model.
10. The intelligent robot dialogue system according to claim 6, wherein the dialogue generation model is an
encoder-decoder.
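The keyword filtering recited in claims 2 and 7 (selecting only keywords that have appeared in the historical dialogue interaction information) can be sketched as below. The function name `filter_keywords` and the fall-back behavior when no keyword matches are illustrative assumptions, not part of the claims.

```python
def filter_keywords(keywords, history):
    # Keep only keywords that already appeared in the historical
    # dialogue interaction information (claims 2 and 7).
    history_text = " ".join(history).lower()
    selected = [kw for kw in keywords if kw.lower() in history_text]
    # Assumed fallback: if no keyword matches the history, keep them all
    # so the robot can still open a new topic.
    return selected if selected else keywords

history = [
    "Yesterday we talked about your favorite book.",
    "You said you like reading at night.",
]
print(filter_keywords(["book", "cup", "lamp"], history))  # → ['book']
```

Filtering against the dialogue history biases the generated dialogue toward topics the user has already engaged with, which is how the claims tie scene keywords to conversational relevance.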
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710089553.2A CN107016046A (en) | 2017-02-20 | 2017-02-20 | Intelligent robot dialogue method and system based on visual display |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107016046A true CN107016046A (en) | 2017-08-04 |
Family
ID=59440792
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710089553.2A Pending CN107016046A (en) | 2017-02-20 | 2017-02-20 | Intelligent robot dialogue method and system based on visual display |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107016046A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015158878A1 (en) * | 2014-04-17 | 2015-10-22 | Aldebaran Robotics | Methods and systems of handling a dialog with a robot |
CN105868827A (en) * | 2016-03-25 | 2016-08-17 | 北京光年无限科技有限公司 | Multi-mode interaction method for intelligent robot, and intelligent robot |
CN105894873A (en) * | 2016-06-01 | 2016-08-24 | 北京光年无限科技有限公司 | Child teaching method and device orienting to intelligent robot |
CN105912128A (en) * | 2016-04-29 | 2016-08-31 | 北京光年无限科技有限公司 | Smart robot-oriented multimodal interactive data processing method and apparatus |
CN105913039A (en) * | 2016-04-26 | 2016-08-31 | 北京光年无限科技有限公司 | Visual-and-vocal sense based dialogue data interactive processing method and apparatus |
CN106055662A (en) * | 2016-06-02 | 2016-10-26 | 竹间智能科技(上海)有限公司 | Emotion-based intelligent conversation method and system |
CN106097793A (en) * | 2016-07-21 | 2016-11-09 | 北京光年无限科技有限公司 | A kind of child teaching method and apparatus towards intelligent robot |
CN106297789A (en) * | 2016-08-19 | 2017-01-04 | 北京光年无限科技有限公司 | The personalized interaction method of intelligent robot and interactive system |
CN106294854A (en) * | 2016-08-22 | 2017-01-04 | 北京光年无限科技有限公司 | A kind of man-machine interaction method for intelligent robot and device |
CN106294678A (en) * | 2016-08-05 | 2017-01-04 | 北京光年无限科技有限公司 | The topic apparatus for initiating of a kind of intelligent robot and method |
CN106372195A (en) * | 2016-08-31 | 2017-02-01 | 北京光年无限科技有限公司 | Human-computer interaction method and device for intelligent robot |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107491435A (en) * | 2017-08-14 | 2017-12-19 | 深圳狗尾草智能科技有限公司 | Method and device based on Computer Automatic Recognition user feeling |
CN107491435B (en) * | 2017-08-14 | 2021-02-26 | 苏州狗尾草智能科技有限公司 | Method and device for automatically identifying user emotion based on computer |
CN107862058A (en) * | 2017-11-10 | 2018-03-30 | 北京百度网讯科技有限公司 | Method and apparatus for generating information |
CN107862058B (en) * | 2017-11-10 | 2021-10-22 | 北京百度网讯科技有限公司 | Method and apparatus for generating information |
CN108228794B (en) * | 2017-12-29 | 2020-03-31 | 三角兽(北京)科技有限公司 | Information management apparatus, information processing apparatus, and automatic replying/commenting method |
CN108228794A (en) * | 2017-12-29 | 2018-06-29 | 三角兽(北京)科技有限公司 | Apparatus for management of information, information processing unit and automatically reply/comment method |
CN108961431A (en) * | 2018-07-03 | 2018-12-07 | 百度在线网络技术(北京)有限公司 | Generation method, device and the terminal device of facial expression |
CN110297617A (en) * | 2019-06-28 | 2019-10-01 | 北京蓦然认知科技有限公司 | A kind of initiating method and device of active interlocution |
CN110196931B (en) * | 2019-06-28 | 2021-10-08 | 北京蓦然认知科技有限公司 | Image description-based dialog generation method and device |
CN110196931A (en) * | 2019-06-28 | 2019-09-03 | 北京蓦然认知科技有限公司 | Dialogue generation method and device based on image description |
CN113656562A (en) * | 2020-11-27 | 2021-11-16 | 话媒(广州)科技有限公司 | Multi-round man-machine psychological interaction method and device |
CN113515590A (en) * | 2021-04-21 | 2021-10-19 | 洛阳青鸟网络科技有限公司 | Intelligent robot response method and device based on big data |
WO2024088039A1 (en) * | 2022-10-28 | 2024-05-02 | 华为技术有限公司 | Man-machine dialogue method, dialogue network model training method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107016046A (en) | Intelligent robot dialogue method and system based on visual display | |
CN106985137B (en) | Multi-modal exchange method and system for intelligent robot | |
CN111563417B (en) | Pyramid structure convolutional neural network-based facial expression recognition method | |
Ma et al. | ElderReact: a multimodal dataset for recognizing emotional response in aging adults | |
CN110569795A (en) | An image recognition method, device and related equipment | |
CN104077579B (en) | Facial expression recognition method based on expert system | |
CN112699774A (en) | Method and device for recognizing emotion of person in video, computer equipment and medium | |
CN109117952B (en) | A Deep Learning-Based Approach for Robotic Emotional Cognition | |
Ashwin et al. | An e-learning system with multifacial emotion recognition using supervised machine learning | |
CN111339940B (en) | Video risk identification method and device | |
CN110909680A (en) | Facial expression recognition method and device, electronic equipment and storage medium | |
Rwelli et al. | Gesture based Arabic sign language recognition for impaired people based on convolution neural network | |
CN114821744A (en) | Expression recognition-based virtual character driving method, device and equipment | |
CN106873893A (en) | For the multi-modal exchange method and device of intelligent robot | |
CN111401116B (en) | Bimodal emotion recognition method based on enhanced convolution and space-time LSTM network | |
Salih et al. | Study of video based facial expression and emotions recognition methods | |
CN112784926A (en) | Gesture interaction method and system | |
CN109086351B (en) | Method for acquiring user tag and user tag system | |
Garg et al. | Facial expression recognition & classification using hybridization of ICA, GA, and neural network for human-computer interaction | |
Minu | A extensive survey on sign language recognition methods | |
Almana et al. | Real-time Arabic sign language recognition using CNN and OpenCV | |
CN111783587B (en) | Interaction method, device and storage medium | |
Wu et al. | Question-driven multiple attention (dqma) model for visual question answer | |
CN106897665A (en) | It is applied to the object identification method and system of intelligent robot | |
CN109086391B (en) | Method and system for constructing knowledge graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170804 |