
CN105425970A - Human-machine interaction method and device, and robot - Google Patents


Info

Publication number
CN105425970A
CN105425970A (application CN201511016826.8A)
Authority
CN
China
Prior art keywords
user
identity
information
voice
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511016826.8A
Other languages
Chinese (zh)
Other versions
CN105425970B (en)
Inventor
闫振雷
纪婧文
杨雪慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hande Intelligent Technology Co ltd
Original Assignee
Shenzhen Lingyang Micro Server Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lingyang Micro Server Robot Technology Co Ltd
Priority to CN201511016826.8A
Publication of CN105425970A
Application granted
Publication of CN105425970B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 Retrieval characterised by using metadata using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides a human-machine interaction method and device, and a robot. The human-machine interaction method comprises the following steps: collecting multimedia information of a user, wherein the multimedia information comprises user image information and/or user voice information; determining the identity of the user and an interaction mode according to multimedia information stored in a preset database and the collected multimedia information, wherein the interaction mode comprises a voice interaction mode and/or an action interaction mode; and retrieving interactive content corresponding to the identity of the user and interacting with the user in the determined interaction mode. Because the method retrieves the interactive content matched to the identified identity of the user, it achieves diverse and targeted human-machine interaction.

Description

Man-machine interaction method and device and robot
Technical Field
The invention relates to the technical field of robots, in particular to a human-computer interaction method, a human-computer interaction device and a robot.
Background
With the rapid development of information technology, and of the internet in particular, data informatization is advancing steadily. More and more tasks can be completed by intelligent robots, which already play various roles in daily life. Intelligent service robots are therefore bound to become mainstream replacements for manual service, and using an intelligent robot in the receptionist role is a future development trend for companies.
Currently, the related art provides various types of robots. For example, a greeting robot mainly greets guests actively, saying "Hello! Welcome!" when a guest arrives and "Goodbye, welcome back next time!" when the guest is identified as leaving; this greeting pattern is fixed. As another example, an accompanying robot mainly executes corresponding actions according to the user's spoken instructions: when the user says "Help me get up!", the accompanying robot lifts the person from the bed with both hands and helps the elderly person into a wheelchair.
However, in the course of implementing the present invention, the inventors found that the related art has at least the following problem: most current robots mainly provide a greeting or guiding function, so their functions are limited and their degree of intelligence is low.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method and an apparatus for human-machine interaction, and a robot, so as to solve the above problems.
In a first aspect, an embodiment of the present invention provides a method for human-computer interaction, including:
collecting multimedia information of a user, wherein the multimedia information comprises: user image information and/or user voice information;
determining the identity and the interaction mode of the user according to multimedia information stored in a preset database and the collected multimedia information; wherein, the interaction mode comprises: voice interaction and/or action interaction;
and calling interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user.
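The three claimed steps can be sketched as follows. This is a minimal illustration only: the database schema, the face-ID matching, and all names are assumptions of this example, not specified by the patent.

```python
# Illustrative end-to-end sketch of the claimed three-step method:
# collect multimedia info -> determine identity and mode -> retrieve content.
PRESET_DB = {
    "face_zhang": {"identity": "employee Zhang", "mode": "voice",
                   "content": "Good morning, Zhang!"},
}

def identify(multimedia):
    """Step 2: match collected image/voice info against the preset database."""
    face = multimedia.get("face_id")
    return face if face in PRESET_DB else "unknown"

def interact(multimedia):
    """Steps 1-3: identify the user, then retrieve the matching content."""
    key = identify(multimedia)
    if key == "unknown":
        # Unrecognized users are treated as visitors (voice interaction).
        return "[voice] Welcome! How can I help you?"
    entry = PRESET_DB[key]
    return "[{}] {}".format(entry["mode"], entry["content"])
```

A recognized employee receives their stored personal content, while an unrecognized user falls through to the generic visitor greeting.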
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the acquiring the multimedia information of the user includes:
automatically collecting the multimedia information of the user when it is monitored that the user enters an image acquisition area;
or collecting the multimedia information of the user when it is monitored that the user has triggered a collection option.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the acquiring the multimedia information of the user includes: when a phone ring is identified, automatically answering the phone and collecting voice information of a user in a call;
the determining the identity and the interaction mode of the user according to the multimedia information stored in the preset database and the collected multimedia information comprises: recognizing a voice keyword from the collected voice information of the user in the call; and determining, according to the voice keyword and voice recognition information stored in a preset database, the identity of the user and that the current interaction mode is action interaction, wherein the voice keyword comprises the name of the user and/or the name of the person the user wishes to reach;
the calling of the interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user comprises: and calling a telephone switching number corresponding to the identity of the user, and switching the call to a terminal corresponding to the telephone switching number.
With reference to any one of the first aspect to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the invoking of the interactive content corresponding to the identity of the user according to the determined interactive manner to interact with the user includes:
and recognizing the language type corresponding to the voice according to the voice information of the user, and calling interactive content corresponding to the identity of the user according to the determined interactive mode and the recognized language type to interact with the user.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the method for human-computer interaction further includes: opening the automatic door when a doorbell sound is identified.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the invoking of the interactive content corresponding to the identity of the user according to the determined interactive manner to interact with the user includes:
searching a preset database for interactive content corresponding to the identity of the user according to the determined interaction mode, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; and identities of identified strangers and corresponding image information;
and calling the searched interactive content and interacting with the user.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the method for human-computer interaction further includes:
when multimedia information in the inspection process is collected, judging the abnormal condition of the collected multimedia information;
raising an alarm when a sudden or abnormal condition is identified in the multimedia information, wherein the alarm comprises at least one of the following: an on-site voice or light alarm, or an alert sent to the customer service center or related personnel.
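The patrol-and-alarm behavior above can be sketched as follows. The anomaly detector is a placeholder (an upstream analysis is assumed to have flagged the frame), and the alarm-channel strings are illustrative.

```python
def is_abnormal(frame):
    # Placeholder detector: assume an upstream analysis flags the collected
    # multimedia frame when a sudden or abnormal condition is present.
    return bool(frame.get("anomaly"))

def patrol_step(frame, alarms):
    """Inspect one collected frame during the patrol; when an abnormal
    condition is found, trigger the alarm channels the claim names:
    an on-site voice/light alarm and notifying customer service."""
    if is_abnormal(frame):
        alarms.append("on-site voice/light alarm")
        alarms.append("notify customer service center")
    return alarms
```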
In a second aspect, an embodiment of the present invention further provides an apparatus for human-computer interaction, where the apparatus includes:
the information acquisition module is used for acquiring multimedia information of a user, wherein the multimedia information comprises: user image information and/or user voice information;
the identity and interaction determining module is used for determining the identity and the interaction mode of the user according to the multimedia information stored in a preset database and the collected multimedia information; wherein, the interaction mode comprises: voice interaction and/or action interaction;
and the interactive content calling module is used for calling the interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the information acquisition module includes:
the first acquisition unit is used for automatically acquiring multimedia information of a user when the situation that the user enters an image acquisition area is monitored; or,
and the second acquisition unit is used for acquiring the multimedia information of the user when the condition that the user triggers the acquisition option function is monitored.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the information acquisition module includes:
the call voice acquisition unit is used for automatically answering the call when a call ringtone is identified, and acquiring voice information of a user in the call;
the identity and interaction determination module comprises:
the keyword recognition unit is used for recognizing voice keywords according to the collected voice information of the user in the call;
the identity determining unit is used for determining, according to the voice keywords and voice recognition information stored in a preset database, the identity of the user and that the current interaction mode is action interaction, wherein the voice keywords comprise the name of the user and/or the name of the person the user wishes to reach;
the interactive content calling module comprises:
and the transfer number calling unit is used for calling a telephone transfer number corresponding to the identity of the user and transferring the call to a terminal corresponding to the telephone transfer number.
With reference to any one of the second aspect to the second possible implementation manner of the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where the interactive content retrieving module includes:
the language type identification unit is used for identifying the language type corresponding to the voice according to the voice information of the user;
and the first interactive content calling unit is used for calling the interactive content corresponding to the identity of the user according to the determined interactive mode and the identified language type to interact with the user.
With reference to the second aspect, an embodiment of the present invention provides a fourth possible implementation manner of the second aspect, where the human-computer interaction apparatus further includes:
and the automatic door module is used for opening the automatic door when a doorbell sound is identified.
With reference to the second aspect, an embodiment of the present invention provides a fifth possible implementation manner of the second aspect, where the interactive content retrieving module includes:
the interactive content searching unit is used for searching a preset database for interactive content corresponding to the identity of the user according to the determined interaction mode, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; and identities of identified strangers and corresponding image information;
and the second interactive content calling unit is used for calling the searched interactive content and interacting with the user.
With reference to the second aspect, an embodiment of the present invention provides a sixth possible implementation manner of the second aspect, where the human-computer interaction apparatus further includes:
the abnormal condition judgment module is used for judging the abnormal condition of the collected multimedia information when the multimedia information in the inspection process is collected;
the alarm module is used for raising an alarm when a sudden or abnormal condition is identified in the multimedia information, wherein the alarm comprises at least one of the following: an on-site voice or light alarm, or an alert sent to the customer service center or related personnel.
In a third aspect, an embodiment of the present invention further provides a robot, including: the man-machine interaction device.
In the human-machine interaction method and device provided by the embodiments of the invention, multimedia information of a user is first collected, wherein the multimedia information comprises user image information and/or user voice information. The identity of the user and the interaction mode are then determined according to the multimedia information stored in a preset database and the collected multimedia information, wherein the interaction mode comprises voice interaction and/or action interaction. Finally, interactive content corresponding to the identity of the user is retrieved according to the determined interaction mode, and the content matched to the identified user identity is used to interact with the user. This achieves diverse and targeted human-machine interaction.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of a human-machine interaction method provided by an embodiment of the invention;
FIG. 2 is a flow chart of another method for human-machine interaction provided by an embodiment of the invention;
fig. 3 is a schematic structural diagram illustrating a human-computer interaction device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another human-computer interaction device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another human-computer interaction device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Considering that most current robots mainly provide a greeting or guiding function, so that their functions are limited, their intelligence is low, and the human-computer interaction effect is poor, the embodiments of the present invention provide a human-machine interaction method, a device and a robot, which are described in the following embodiments.
As shown in fig. 1, an embodiment of the present invention provides a method for human-computer interaction, where the method includes steps S102 to S106, which are specifically as follows:
step S102: collecting multimedia information of a user, wherein the multimedia information can be user image information, user voice information, or user image information and user voice information which are collected simultaneously;
the method can acquire the multimedia information of the user in the following ways:
One mode is active collection by real-time monitoring: the multimedia information of the user is collected automatically as soon as it is monitored that the user has entered the image acquisition area.
The other mode is passive collection: the multimedia information of the user is collected only when it is monitored that the user has triggered a collection option, for example when the user is detected pressing a collection key or ticking a check box on a touch screen.
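The two collection triggers above amount to a simple disjunction, sketched below. The event field names are assumptions made for this example.

```python
def should_collect(event):
    """Start collection either actively (the user was monitored entering
    the image acquisition area) or passively (the user pressed a
    collection key or ticked a check box on the touch screen)."""
    return bool(event.get("entered_area") or event.get("collect_triggered"))
```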
Step S104: determining the identity and the interaction mode of the user according to multimedia information stored in a preset database and the collected multimedia information; wherein, the above-mentioned interactive mode includes: voice interaction and/or action interaction;
Specifically, the user image information collected in step S102 is compared with the image information stored in a preset database to determine the identity of the user. Meanwhile, keywords are identified from the collected voice information of the user, and the interaction mode is determined from the identified keywords. For example, if the user is identified as a visitor looking for employee XXX of the company, the interaction mode is both action interaction and voice interaction: in the following step S106, the corresponding interactive content is retrieved, the visitor is first greeted in a reception manner, and the user is then guided to the workstation of employee XXX.
Step S106: and calling interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user.
After the identity of the user and the interaction mode are determined in step S104, step S106 retrieves the corresponding interactive content from the preset database and carries out the interaction. For example, if the user is judged to be a company employee named XXX, the robot greets "Good morning/afternoon, XXX"; if the identity is not recognized, the user is treated as a visitor and greeted with "Welcome to company XXX, how may I help you?". A keyword is then identified from the user's voice information, and a relevant answer is retrieved from the preset database to communicate and interact with the user. In addition, when no matching answer is retrieved, the session is transferred to the customer service center, where a human agent continues to serve the visitor or caller; face-to-face communication can also be achieved through a video call.
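The identity-dependent greeting of step S106 can be sketched as below; the face-ID map, employee name, and greeting strings are illustrative assumptions.

```python
EMPLOYEES = {"face_001": "XXX"}  # hypothetical face-ID -> employee-name map

def greeting(face_id, period="morning"):
    """Step S106 sketch: recognized employees get a personal time-of-day
    greeting; unrecognized users are treated as visitors."""
    if face_id in EMPLOYEES:
        return "Good {}, {}!".format(period, EMPLOYEES[face_id])
    return "Welcome to company XXX. How may I help you?"
```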
In the human-computer interaction method provided by the embodiment of the invention, multimedia information of a user is first collected, wherein the multimedia information comprises user image information and/or user voice information. The identity of the user and the interaction mode are then determined according to the multimedia information stored in a preset database and the collected multimedia information, wherein the interaction mode comprises voice interaction and/or action interaction. Finally, interactive content corresponding to the identity of the user is retrieved according to the determined interaction mode to interact with the user. In this embodiment, the identity of the user is recognized by face recognition, keywords are recognized by voice keyword recognition, and the interactive content is determined from the keywords and the user identity before interacting with the user. In this way, different interactive content is retrieved for different users, which makes human-machine interaction diverse, flexible and targeted.
Further, considering that a guest may call the company at any time, whether during or outside working hours, fig. 2 shows another flow of the human-machine interaction method, as follows:
step S202: when a phone ring is identified, automatically answering the phone and collecting voice information of a user in a call;
step S204: recognizing a voice keyword according to the collected voice information of the user in the call;
step S206: determining, according to the voice keyword and voice recognition information stored in a preset database, the identity of the user and that the current interaction mode is action interaction, wherein the voice keyword comprises the name of the user and/or the name of the person the user wishes to reach;
step S208: and calling a telephone switching number corresponding to the identity of the user, and switching the call to a terminal corresponding to the telephone switching number.
The telephone transfer number corresponding to the identity of the user may be a mobile phone number of the called party, an extension number, or the customer service center number. For example, if the user needs to contact company employee XX, XX's telephone number is used as the transfer number; if XX's number is not found in the preset database, the customer service center number is used as the transfer number instead.
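The lookup-with-fallback behavior of steps S206 to S208 can be sketched as follows; the phone book contents and number formats are assumptions for this example.

```python
PHONE_BOOK = {"XX": "ext-201"}   # employee name -> transfer number (illustrative)
CUSTOMER_SERVICE = "ext-000"     # fallback when the callee is not found

def transfer_number(callee_name):
    """Steps S206-S208 sketch: resolve the requested employee's transfer
    number from the preset database, falling back to the customer
    service center when the employee is not found."""
    return PHONE_BOOK.get(callee_name, CUSTOMER_SERVICE)
```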
In addition, the robot can interact with the caller during the answering process to directly answer the caller's questions. Specifically, when a call is identified, the telephone is answered automatically and keywords are recognized from the caller's voice; relevant answers are then retrieved from the preset database to communicate with the caller. When no matching answer is retrieved, the call can be transferred to the customer service center so that a human agent continues to serve the caller.
In the embodiment of the invention, by setting the function of automatically answering the call and combining the voice keyword recognition technology, the interactive content corresponding to the identity of the user can be output, and the call can be automatically transferred to the call transfer number required by the user in the call, so that the diversity of man-machine interaction is further realized, and the flexibility of man-machine interaction is improved.
Further, since the collected users may speak different languages, in order to improve the user experience, retrieving the interactive content corresponding to the identity of the user according to the determined interaction mode to interact with the user includes:
and recognizing the language type corresponding to the voice according to the voice information of the user, and calling interactive content corresponding to the identity of the user according to the determined interactive mode and the recognized language type to interact with the user.
The method can also identify the appearance characteristics of the user from the user's image information and retrieve the interactive content corresponding to the identity of the user according to the determined interaction mode and the language type corresponding to the identified appearance characteristics. The language type can thus be determined in two ways: from the voice information, or from the appearance characteristics inferred from the image information. If both ways are used at the same time, the determined language type is more reliable. In either case, the language type is determined from the user's multimedia information, which enables communication in a multi-language environment.
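The two language cues described above can be combined as sketched below. Giving the speech cue precedence and defaulting to English are design assumptions of this sketch, not statements of the patent.

```python
def choose_language(voice_lang=None, appearance_lang=None):
    """Combine the two cues the text describes: the language identified
    from the user's speech, and the language inferred from the user's
    appearance characteristics."""
    if voice_lang:
        return voice_lang        # speech is the most direct cue
    if appearance_lang:
        return appearance_lang   # fall back to the appearance-based guess
    return "en"                  # no cue available (assumed default)
```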
In the embodiment of the invention, the language type is identified either from the collected voice information of the user or from the appearance characteristics identified in the collected image information, and the corresponding language is then used to communicate with the visitor. This multi-language communication mode further increases the flexibility of human-machine interaction and improves the visitor's experience.
Further, the method further comprises: when the doorbell sound is identified, the automatic door is started to open.
In the embodiment of the present invention, the automatic door can be opened automatically. Specifically: when the doorbell sound is identified, the automatic door is opened automatically, the collected image information of the user is recorded, and a targeted interaction mode is adopted, which facilitates real-time supervision and subsequent tracking.
In order to further increase the flexibility of human-computer interaction, the calling the interactive content corresponding to the identity of the user according to the determined interaction mode to interact with the user comprises:
searching a preset database for interactive content corresponding to the identity of the user according to the determined interaction mode, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; and identities of identified strangers and corresponding image information;
and calling the searched interactive content and interacting with the user.
Specifically, the information of a visiting client and the corresponding interactive content may be stored in the preset database in advance. For example, when a certain leader is expected to visit in the near future, the leader's identification data (face image and/or voice information) can be temporarily added, the corresponding identification mode (face recognition and/or voice recognition) can be enabled on the robot, and a matching human-machine interaction mode (such as a specially configured welcome routine) can be set. If the user is then determined, from the pre-stored data, to be a particular expert/professor, the robot greets with "A warm welcome to expert/professor XXX visiting company XXX". In this way, the robot can be configured to perform special interaction modes, which further improves the flexibility and practicability of human-machine interaction;
the information of staff to be prompted and the corresponding prompting instructions can also be stored in the preset database in advance. For example, for internal staff, the required interactive services can be configured on the robot; after a staff member's identity is recognized, the corresponding interactions are provided for that identity. If an internal staff member has meeting X the next day, a meeting reminder can be set on the robot in advance; once the robot stores this setting, it performs the configured reminder interaction when the staff member is recognized the next day. As another example, the reminder may be whether the staff member has brought a work badge, or whether he or she is late according to the current time. Combining identity recognition with configurable interaction modes further enhances the flexibility and practicability of human-machine interaction;
the identity of an identified stranger and the corresponding image information can also be stored in the preset database in real time, specifically: recording the image of a person who enters the company for the first time and storing the person's identity; when the person enters the company again, automatically recognizing the identity, determining and outputting a communication statement according to the stored identity, and carrying out communication and interaction corresponding to that identity. For example: the person's identity is determined to be a courier during the first conversation, and when the courier comes a second time, the robot proactively greets the courier and asks about the delivery.
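The stranger-memory behavior described above — store the identity on the first visit, greet according to it afterwards — can be sketched like this. The face identifier, role labels, and greeting texts are hypothetical:

```python
from typing import Dict, Optional

STRANGER_DB: Dict[str, str] = {}  # face identifier -> remembered identity label

def handle_visitor(face_id: str, inferred_role: Optional[str] = None) -> str:
    """First visit: remember the person's identity (e.g. learned during the
    initial conversation). Later visits: greet according to the stored identity."""
    if face_id not in STRANGER_DB:
        STRANGER_DB[face_id] = inferred_role or "unknown"
        return "Hello, welcome. May I ask who you are visiting?"
    if STRANGER_DB[face_id] == "courier":
        return "Hello again! Which company is the delivery for?"
    return "Welcome back!"
```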
In the embodiment of the invention, the user information and the corresponding interactive content are preset in the preset database, and the human-computer interaction mode is flexibly set, and meanwhile, the human face identity memory function is added, so that the flexibility of human-computer interaction is further increased, and the user experience is improved.
In addition, the method further comprises: when multimedia information is collected during the inspection process, judging whether the collected multimedia information indicates an abnormal condition;
when an emergency or abnormal condition is recognized in the multimedia information, raising an alarm, wherein the alarm includes at least one of the following: an on-site voice or light alarm, and an alarm to the customer service center or related personnel.
Specifically, when an emergency or abnormal condition is recognized, the alarm module can raise an alarm. The alarm mode can be an on-site voice alarm, a flashing-light alarm, switching to the customer service center, or switching to a preset mobile phone. Real-time monitoring is carried out through the camera acquisition device on the robot, achieving 24-hour, day-and-night monitoring: if a person breaks in outside working hours, the alarm system can be triggered in time, a notification can be pushed to the company front-desk system and the APP, and building property security can be informed promptly, avoiding loss of company property. Meanwhile, the monitoring data are stored in the cloud server, which communicates with clients wirelessly, so that users on the corresponding clients can obtain the required information from the cloud server at any time.
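A minimal sketch of the alarm dispatch described above. The specific escalation policy (adding remote channels outside working hours) is an assumption for illustration:

```python
from enum import Enum
from typing import List

class AlarmChannel(Enum):
    ONSITE_VOICE = "on-site voice alarm"
    ONSITE_LIGHT = "flashing-light alarm"
    CUSTOMER_SERVICE = "switch to customer service center"
    PHONE = "switch to preset mobile phone"

def dispatch_alarm(is_abnormal: bool, in_working_hours: bool) -> List[AlarmChannel]:
    """Pick alarm channels for an abnormal event; outside working hours the
    remote channels are added as well (the escalation policy is an assumption)."""
    if not is_abnormal:
        return []
    channels = [AlarmChannel.ONSITE_VOICE, AlarmChannel.ONSITE_LIGHT]
    if not in_working_hours:
        channels += [AlarmChannel.CUSTOMER_SERVICE, AlarmChannel.PHONE]
    return channels
```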
In summary, in the human-machine interaction method provided in the embodiment of the present invention, multimedia information of a user is first collected, where the multimedia information includes user image information and/or user voice information; then, the identity of the user and the interaction mode are determined according to the multimedia information stored in a preset database and the collected multimedia information, where the interaction mode includes voice interaction and/or action interaction; finally, the interactive content corresponding to the identity of the user is invoked, according to the determined interaction mode, to interact with the user. The embodiment of the invention recognizes the identity of the user by face recognition and invokes interactive content matching the recognized identity; at the same time, it recognizes keywords by voice keyword recognition, determines the interactive content according to the keywords and the user identity, and then interacts with the user. This realizes diversity of human-machine interaction, improves its flexibility, invokes different interactive content for different users, and achieves targeted human-machine interaction. Furthermore, by providing an automatic call-answering function combined with voice keyword recognition, the interactive content corresponding to the identity of the user can be output, and a call can be automatically transferred to the transfer number the caller requests, further increasing the diversity and flexibility of human-machine interaction. Furthermore, the language type is recognized from the collected voice information of the user, or the appearance characteristics of the user are recognized from the collected image information and the language type is determined from those characteristics; the corresponding language is then used to communicate and interact with the visitor, and this multi-language communication mode further improves the flexibility of human-machine interaction and the visitor's experience. Furthermore, user information and corresponding interactive content are preset in the preset database, the human-machine interaction mode is flexibly configurable, and a face identity memory function is added, further increasing the flexibility of human-machine interaction and improving the user experience.
As shown in fig. 3, an embodiment of the present invention provides a human-computer interaction apparatus, including:
an information collecting module 402, configured to collect multimedia information of a user, where the multimedia information includes: user image information and/or user voice information;
an identity and interaction determining module 404, configured to determine an identity and an interaction manner of the user according to multimedia information stored in a preset database and the collected multimedia information; wherein, the above-mentioned interactive mode includes: voice interaction and/or action interaction;
and an interactive content retrieving module 406, configured to retrieve, according to the determined interactive manner, interactive content corresponding to the identity of the user to interact with the user.
Further, the information collecting module 402 includes:
the first acquisition unit is used for automatically acquiring the multimedia information of the user when the situation that the user enters an image acquisition area is monitored; or,
and the second acquisition unit is used for acquiring the multimedia information of the user when the condition that the user triggers the acquisition option function is monitored.
Further, as shown in fig. 4, the information collecting module 402 includes:
the call voice acquisition unit 4021 is used for automatically answering a call and acquiring voice information of a user in a call when a call ringtone is identified;
the identity and interaction determination module 404 includes:
a keyword recognition unit 4041, configured to recognize a voice keyword according to the collected voice information of the user in the call;
a user identity determining unit 4042, configured to determine, according to the voice keyword and voice recognition information stored in a preset database, that the identity of the user and the current interaction mode are action interactions, where the voice keyword includes a name of the user and/or a name of a person to whom the user needs to connect a call;
the interactive content retrieving module 406 includes:
the transfer number retrieving unit 4061 is configured to retrieve a call transfer number corresponding to the identity of the user, and transfer the call to a terminal corresponding to the call transfer number.
Further, as shown in fig. 5, the interactive content retrieving module 406 includes:
a language type recognition unit 4062, configured to recognize a language type corresponding to the voice according to the voice information of the user;
the first interactive content retrieving unit 4063 is configured to retrieve the interactive content corresponding to the identity of the user according to the determined interaction manner and the identified language type to interact with the user.
Further, the above human-computer interaction device further comprises:
and the automatic door starting module is used for starting the automatic door to open when the doorbell sound is identified.
Further, the interactive content retrieving module 406 includes:
the interactive content searching unit is used for searching, according to the determined interaction mode, for interactive content corresponding to the identity of the user in a preset database, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; identities of previously identified strangers and corresponding image information;
and the second interactive content calling unit is used for calling the searched interactive content and interacting with the user.
Further, the above human-computer interaction device further comprises:
the abnormal condition judging module is used for judging the abnormal condition of the collected multimedia information when multimedia information is collected during the inspection process;
the alarm module is used for alarming when the multimedia information is identified to be in a burst or abnormal condition, wherein the alarm at least comprises one of the following: the alarm is given by voice or light on site, and the alarm is given to the customer service center or related personnel.
An embodiment of the present invention further provides a robot, including: the man-machine interaction device.
Considering that most current robots mainly provide a question-answering or guidance function, their capabilities are limited and their intelligence is low; the functions that existing robots can achieve are insufficient for supervising front-desk work and cannot replace front-desk personnel in completing related tasks. The robot provided by the embodiment of the present invention embeds a face recognition function: the identity of a person entering the company is judged by collecting the person's image information, comparing it with image information stored in a database in advance, and then communicating in a preset mode. A voice keyword recognition function is also embedded: the voice information of the person entering the company is collected, keywords are recognized, and interactive content is invoked according to the keywords and the person's identity. The robot provided by the embodiment of the invention can therefore replace personnel in communicating and interacting accurately with visitors, better meeting the actual requirements of a front-desk robot; it can also be applied in other scenarios requiring diverse and flexible human-machine interaction.
Further, a clock-in system can be embedded in the robot to generate attendance records automatically. Specifically: by recognizing the images of people entering and leaving, the times at which they enter and leave the company are recorded and an attendance list is generated, whose record items include the date, the clock-in time, and the clock-off time.
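One attendance record of the kind described above could be assembled as follows. The record fields and the 9:00 workday start are illustrative assumptions:

```python
from datetime import datetime, time
from typing import Dict

def attendance_entry(employee_id: str, seen_at: datetime,
                     workday_start: time = time(9, 0)) -> Dict[str, object]:
    """Build one attendance record from a recognized entry event and flag
    lateness against a configurable workday start (9:00 is an assumption)."""
    return {
        "employee": employee_id,
        "date": seen_at.date().isoformat(),
        "clock_in": seen_at.time().isoformat(timespec="minutes"),
        "late": seen_at.time() > workday_start,
    }
```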
Further, a reservation module may be embedded in the robot, for example for meeting-room reservations and the like.
Furthermore, a sound source localization function can be embedded in the robot, so that the robot can adjust its orientation in time according to the position of the sound source and always face the speaker.
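Sound source localization of the kind just mentioned is commonly done from the time difference of arrival (TDOA) between two microphones. A minimal far-field sketch, with microphone spacing and the speed of sound as the only parameters (the patent does not specify the method, so this is an assumption):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air at room temperature

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the source bearing (degrees off the two-microphone array's
    broadside) from the time difference of arrival, using the standard
    far-field approximation sin(theta) = delay * c / spacing."""
    ratio = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(ratio))
```

The robot would then rotate by the estimated bearing so its face tracks the speaker.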
Based on the above analysis, compared with robots in the related art, the human-machine interaction device and the robot provided in the embodiment of the present invention first collect multimedia information of a user, where the multimedia information includes user image information and/or user voice information; then determine the identity of the user and the interaction mode according to the multimedia information stored in a preset database and the collected multimedia information, where the interaction mode includes voice interaction and/or action interaction; and finally invoke the interactive content corresponding to the identity of the user, according to the determined interaction mode, to interact with the user. The embodiment of the invention recognizes the identity of the user by face recognition and invokes interactive content matching the recognized identity; at the same time, it recognizes keywords by voice keyword recognition, determines the interactive content according to the keywords and the user identity, and then interacts with the user. This realizes diversity of human-machine interaction, improves its flexibility, invokes different interactive content for different users, and achieves targeted human-machine interaction. Furthermore, by providing an automatic call-answering function combined with voice keyword recognition, the interactive content corresponding to the identity of the user can be output, and a call can be automatically transferred to the transfer number the caller requests, further increasing the diversity and flexibility of human-machine interaction. Furthermore, the language type is recognized from the collected voice information of the user, or the appearance characteristics of the user are recognized from the collected image information and the language type is determined from those characteristics; the corresponding language is then used to communicate and interact with the visitor, and this multi-language communication mode further improves the flexibility of human-machine interaction and the visitor's experience. Furthermore, user information and corresponding interactive content are preset in the preset database, the human-machine interaction mode is flexibly configurable, and a face identity memory function is added, further increasing the flexibility of human-machine interaction and improving the user experience.
The human-machine interaction device provided by the embodiment of the invention may be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the device embodiments are silent, reference may be made to the corresponding content in the method embodiments. Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used for illustrating rather than limiting its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions for some technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method of human-computer interaction, comprising:
the method comprises the following steps of collecting multimedia information of a user, wherein the multimedia information comprises: user image information and/or user voice information;
determining the identity and the interaction mode of the user according to multimedia information stored in a preset database and the collected multimedia information; wherein, the interaction mode comprises: voice interaction and/or action interaction;
and calling interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user.
2. The method of claim 1, wherein the collecting multimedia information of the user comprises:
when monitoring that a user enters an image acquisition area, automatically acquiring multimedia information of the user;
or when the condition that the user triggers the acquisition option function is monitored, acquiring the multimedia information of the user.
3. The method of claim 1, wherein the collecting multimedia information of the user comprises: when a phone ring is identified, automatically answering the phone and collecting voice information of a user in a call;
the determining the identity and the interaction mode of the user according to the multimedia information stored in the preset database and the collected multimedia information comprises: recognizing a voice keyword according to the collected voice information of the user in the call; and determining the identity of the user and the current interaction mode as action interaction according to the voice keyword and voice recognition information stored in a preset database, wherein the voice keyword comprises the name of the user and/or the name of the person to whom the user's call needs to be connected;
the calling of the interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user comprises: and calling a telephone switching number corresponding to the identity of the user, and switching the call to a terminal corresponding to the telephone switching number.
4. The human-computer interaction method according to any one of claims 1-3, wherein the calling of the interaction content corresponding to the identity of the user in the determined interaction manner to interact with the user comprises:
and recognizing the language type corresponding to the voice according to the voice information of the user, and calling interactive content corresponding to the identity of the user according to the determined interactive mode and the recognized language type to interact with the user.
5. The method of human-computer interaction of claim 1, further comprising: when the doorbell sound is identified, the automatic door is started to open.
6. The method for human-computer interaction according to claim 1, wherein the invoking of the interactive content corresponding to the identity of the user in the determined interactive manner to interact with the user comprises:
searching, according to the determined interaction mode, for interactive content corresponding to the identity of the user in a preset database, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; identities of previously identified strangers and corresponding image information;
and calling the searched interactive content and interacting with the user.
7. The method of human-computer interaction of claim 1, further comprising:
when multimedia information in the inspection process is collected, judging the abnormal condition of the collected multimedia information;
when the multimedia information is identified to be in a burst or abnormal condition, alarming, wherein the alarming at least comprises one of the following conditions: the alarm is given by voice or light on site, and the alarm is given to the customer service center or related personnel.
8. A human-computer interaction device, comprising:
the information acquisition module is used for acquiring multimedia information of a user, wherein the multimedia information comprises: user image information and/or user voice information;
the identity and interaction determining module is used for determining the identity and the interaction mode of the user according to the multimedia information stored in a preset database and the collected multimedia information; wherein, the interaction mode comprises: voice interaction and/or action interaction;
and the interactive content calling module is used for calling the interactive content corresponding to the identity of the user according to the determined interactive mode to interact with the user.
9. The human-computer interaction device of claim 8, wherein the information collection module comprises:
the first acquisition unit is used for automatically acquiring multimedia information of a user when the situation that the user enters an image acquisition area is monitored; or,
and the second acquisition unit is used for acquiring the multimedia information of the user when the condition that the user triggers the acquisition option function is monitored.
10. The human-computer interaction device of claim 8, wherein the information collection module comprises:
the call voice acquisition unit is used for automatically answering the call when a call ringtone is identified, and acquiring voice information of a user in the call;
the identity and interaction determination module comprises:
the keyword recognition unit is used for recognizing voice keywords according to the collected voice information of the user in the call;
the user identity determining unit is used for determining the identity of the user and the current interaction mode as action interaction according to the voice keyword and voice recognition information stored in a preset database, wherein the voice keyword comprises the name of the user and/or the name of the person to whom the user's call needs to be connected;
the interactive content calling module comprises:
and the transfer number calling unit is used for calling a telephone transfer number corresponding to the identity of the user and transferring the call to a terminal corresponding to the telephone transfer number.
11. The human-computer interaction device according to any one of claims 8-10, wherein the interactive content retrieving module comprises:
the language type identification unit is used for identifying the language type corresponding to the voice according to the voice information of the user;
and the first interactive content calling unit is used for calling the interactive content corresponding to the identity of the user according to the determined interactive mode and the identified language type to interact with the user.
12. The human-computer interaction device of claim 8, further comprising:
and the automatic door starting module is used for starting the automatic door to open when the doorbell sound is identified.
13. The human-computer interaction device of claim 8, wherein the interactive content retrieving module comprises:
the interactive content searching unit is used for searching, according to the determined interaction mode, for interactive content corresponding to the identity of the user in a preset database, wherein one or more of the following are stored in the database in advance: information of visiting clients and corresponding interactive content; information of staff to be prompted and corresponding prompting instructions; identities of previously identified strangers and corresponding image information;
and the second interactive content calling unit is used for calling the searched interactive content and interacting with the user.
14. The human-computer interaction device of claim 8, further comprising:
the abnormal condition judgment module is used for judging the abnormal condition of the collected multimedia information when the multimedia information in the inspection process is collected;
the alarm module is used for alarming when the multimedia information is identified to be in a burst or abnormal condition, wherein the alarm at least comprises one of the following: the alarm is given by voice or light on site, and the alarm is given to the customer service center or related personnel.
15. A robot, comprising: a human-computer interaction device as claimed in any one of claims 8 to 14.
CN201511016826.8A 2015-12-29 2015-12-29 A kind of method, apparatus and robot of human-computer interaction Active CN105425970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511016826.8A CN105425970B (en) 2015-12-29 2015-12-29 A kind of method, apparatus and robot of human-computer interaction


Publications (2)

Publication Number Publication Date
CN105425970A true CN105425970A (en) 2016-03-23
CN105425970B CN105425970B (en) 2019-02-26

Family

ID=55504226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511016826.8A Active CN105425970B (en) 2015-12-29 2015-12-29 A kind of method, apparatus and robot of human-computer interaction

Country Status (1)

Country Link
CN (1) CN105425970B (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106161155A (en) * 2016-06-30 2016-11-23 联想(北京)有限公司 A kind of information processing method and master terminal
CN106294813A (en) * 2016-08-15 2017-01-04 歌尔股份有限公司 A kind of method and apparatus of smart machine person recognition
CN106328139A (en) * 2016-09-14 2017-01-11 努比亚技术有限公司 Voice interaction method and voice interaction system
CN106462384A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Multi-modal based intelligent robot interaction method and intelligent robot
CN106782606A (en) * 2017-01-17 2017-05-31 山东南工机器人科技有限公司 For the communication and interaction systems and its method of work of Dao Jiang robots
CN107220532A (en) * 2017-04-08 2017-09-29 网易(杭州)网络有限公司 For the method and apparatus by voice recognition user identity
CN107340859A (en) * 2017-06-14 2017-11-10 北京光年无限科技有限公司 The multi-modal exchange method and system of multi-modal virtual robot
CN107368720A (en) * 2016-05-11 2017-11-21 中华映管股份有限公司 Intuitive interaction device and interaction method thereof
CN107423398A (en) * 2017-07-26 2017-12-01 腾讯科技(上海)有限公司 Exchange method, device, storage medium and computer equipment
CN107846508A (en) * 2017-10-28 2018-03-27 上海爱优威软件开发有限公司 For the assisted memory method and system of forgetful crowd
CN107977072A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 What a kind of robot used form method, forms expert system and electronic equipment
CN108038937A (en) * 2017-11-22 2018-05-15 同观科技(深圳)有限公司 A kind of methods of exhibiting, device, terminal device and the storage medium of welcome's information
CN108197963A (en) * 2018-03-28 2018-06-22 广州市菲玛尔咨询服务有限公司 A kind of intelligent customer service manages system
CN109343706A (en) * 2018-09-18 2019-02-15 周文 A kind of interactive system and its implementation
CN109644303A (en) * 2016-08-29 2019-04-16 Groove X 株式会社 Identify the autonomous humanoid robot of behavior of Sounnd source direction
CN109756532A (en) * 2017-11-02 2019-05-14 阿里巴巴集团控股有限公司 For the response processing method and device of external staff
CN110177660A (en) * 2017-01-19 2019-08-27 夏普株式会社 Words and deeds control device, robot, the control method for controlling program and words and deeds control device
CN110324797A (en) * 2018-03-28 2019-10-11 上海博泰悦臻电子设备制造有限公司 Telephony intelligence forwarding method, system, vehicle device and storage medium
US10474947B2 (en) 2016-05-26 2019-11-12 Asustek Computer Inc. Control method, electronic device and non-transitory computer readable storage medium
CN110722568A (en) * 2019-11-01 2020-01-24 北京云迹科技有限公司 Robot control method, device, service robot and storage medium
CN111191074A (en) * 2019-12-10 2020-05-22 秒针信息技术有限公司 Member information query method and system based on voiceprint recognition
CN111988474A (en) * 2020-08-21 2020-11-24 国网山东省电力公司莱芜供电公司 Intelligent power dispatching system based on speech recognition
CN112259076A (en) * 2020-10-12 2021-01-22 北京声智科技有限公司 Voice interaction method and device, electronic equipment and computer readable storage medium
CN118544367A (en) * 2024-07-29 2024-08-27 广州市高科通信技术股份有限公司 Intelligent welcome reception method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102300008A (en) * 2010-06-23 2011-12-28 上海博路信息技术有限公司 Automatic master station based on voice identification
CN103945351A (en) * 2013-01-17 2014-07-23 上海博路信息技术有限公司 Automatic switching system based on voice identification
US20140277735A1 (en) * 2013-03-15 2014-09-18 JIBO, Inc. Apparatus and methods for providing a persistent companion device
CN104951077A (en) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Human-machine interaction method and device based on artificial intelligence, and terminal equipment
CN105058393A (en) * 2015-08-17 2015-11-18 李泉生 Guest greeting robot

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368720A (en) * 2016-05-11 2017-11-21 中华映管股份有限公司 Intuitive interaction device and interaction method thereof
US10474947B2 (en) 2016-05-26 2019-11-12 Asustek Computer Inc. Control method, electronic device and non-transitory computer readable storage medium
WO2018000280A1 (en) * 2016-06-29 2018-01-04 深圳狗尾草智能科技有限公司 Multi-modal intelligent robot interaction method and intelligent robot
CN106462384A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Multi-modal intelligent robot interaction method and intelligent robot
CN106462384B (en) * 2016-06-29 2019-05-31 深圳狗尾草智能科技有限公司 Multi-modal intelligent robot interaction method and intelligent robot
CN106161155A (en) * 2016-06-30 2016-11-23 联想(北京)有限公司 Information processing method and main terminal
US10963550B2 (en) 2016-08-15 2021-03-30 Goertek Inc. Method and apparatus for person identification by a smart device
WO2018032931A1 (en) * 2016-08-15 2018-02-22 歌尔股份有限公司 Person recognition method and apparatus of intelligent device
CN106294813A (en) * 2016-08-15 2017-01-04 歌尔股份有限公司 Method and apparatus for person recognition by a smart device
US11376740B2 (en) 2016-08-29 2022-07-05 Groove X, Inc. Autonomously acting robot that recognizes direction of sound source
CN109644303B (en) * 2016-08-29 2021-07-23 Groove X 株式会社 Robot for operating sounding body
CN109644303A (en) * 2016-08-29 2019-04-16 Groove X 株式会社 Autonomously acting robot that recognizes the direction of a sound source
CN106328139A (en) * 2016-09-14 2017-01-11 努比亚技术有限公司 Voice interaction method and voice interaction system
CN106782606A (en) * 2017-01-17 2017-05-31 山东南工机器人科技有限公司 Communication and interaction system for tour-guide robots and working method thereof
CN110177660A (en) * 2017-01-19 2019-08-27 夏普株式会社 Speech and behavior control device, robot, control program, and control method for speech and behavior control device
US11400601B2 (en) 2017-01-19 2022-08-02 Sharp Kabushiki Kaisha Speech and behavior control device, robot, storage medium storing control program, and control method for speech and behavior control device
CN110177660B (en) * 2017-01-19 2022-06-14 夏普株式会社 Language control device, robot, storage medium, and control method
CN107220532A (en) * 2017-04-08 2017-09-29 网易(杭州)网络有限公司 Method and apparatus for recognizing user identity by voice
CN107340859B (en) * 2017-06-14 2021-04-06 北京光年无限科技有限公司 Multi-modal interaction method and system of multi-modal virtual robot
CN107340859A (en) * 2017-06-14 2017-11-10 北京光年无限科技有限公司 Multi-modal interaction method and system for a multi-modal virtual robot
WO2019019935A1 (en) * 2017-07-26 2019-01-31 腾讯科技(深圳)有限公司 Interaction method, interaction terminal, storage medium, and computer device
US11069338B2 (en) 2017-07-26 2021-07-20 Tencent Technology (Shenzhen) Company Limited Interactive method, interactive terminal, storage medium, and computer device
CN107423398A (en) * 2017-07-26 2017-12-01 腾讯科技(上海)有限公司 Interaction method, device, storage medium, and computer device
CN107423398B (en) * 2017-07-26 2023-04-18 腾讯科技(上海)有限公司 Interaction method, interaction device, storage medium and computer equipment
CN107977072B (en) * 2017-07-28 2021-06-08 北京物灵智能科技有限公司 Formation method for robot, formation expert system and electronic equipment
CN107977072A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 Formation method for a robot, formation expert system, and electronic device
CN107846508A (en) * 2017-10-28 2018-03-27 上海爱优威软件开发有限公司 Memory-assistance method and system for forgetful people
CN109756532A (en) * 2017-11-02 2019-05-14 阿里巴巴集团控股有限公司 Response processing method and device for external personnel
CN108038937A (en) * 2017-11-22 2018-05-15 同观科技(深圳)有限公司 Method, device, terminal device, and storage medium for displaying welcome information
CN110324797A (en) * 2018-03-28 2019-10-11 上海博泰悦臻电子设备制造有限公司 Intelligent telephone switching method, system, vehicle machine, and storage medium
CN110324797B (en) * 2018-03-28 2022-04-15 博泰车联网科技(上海)股份有限公司 Intelligent telephone switching method, system, vehicle machine and storage medium
CN108197963A (en) * 2018-03-28 2018-06-22 广州市菲玛尔咨询服务有限公司 Intelligent customer service management system
CN109343706B (en) * 2018-09-18 2022-03-11 周文 Interactive system and implementation method thereof
CN109343706A (en) * 2018-09-18 2019-02-15 周文 Interactive system and implementation method thereof
CN110722568A (en) * 2019-11-01 2020-01-24 北京云迹科技有限公司 Robot control method, device, service robot and storage medium
CN111191074A (en) * 2019-12-10 2020-05-22 秒针信息技术有限公司 Member information query method and system based on voiceprint recognition
CN111988474A (en) * 2020-08-21 2020-11-24 国网山东省电力公司莱芜供电公司 Intelligent power dispatching system based on speech recognition
CN111988474B (en) * 2020-08-21 2021-10-01 国网山东省电力公司莱芜供电公司 Intelligent power dispatching system based on speech recognition
CN112259076A (en) * 2020-10-12 2021-01-22 北京声智科技有限公司 Voice interaction method and device, electronic equipment and computer readable storage medium
CN112259076B (en) * 2020-10-12 2024-03-01 北京声智科技有限公司 Voice interaction method, voice interaction device, electronic equipment and computer readable storage medium
CN118544367A (en) * 2024-07-29 2024-08-27 广州市高科通信技术股份有限公司 Intelligent welcome reception method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN105425970B (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN105425970B (en) 2019-02-26 Human-machine interaction method and device, and robot
KR101827320B1 (en) Server for call center using artificial intelligence
US10284820B2 (en) Covert monitoring and recording of audio and video in controlled-environment facilities
CN105579993B (en) Data integration mechanism for an Internet of Things integrated platform
WO2017117879A1 (en) Personal identification processing method, apparatus and system
US20170289350A1 (en) System and method for initiating an emergency response
US9843916B2 (en) Systems and methods for automatic emergency contact routing
US9667763B1 (en) Family communications in a controlled-environment facility
CN109327631A (en) Intelligent artificial customer service system
US10242417B1 (en) Automated resident check-ins in controlled-environment areas using biometric identification
US20210304339A1 (en) System and a method for locally assessing a user during a test session
US11216900B1 (en) Investigation systems and methods employing positioning information from non-resident devices used for communications with controlled-environment facility residents
Leeney et al. Examining the forensic quality of police call-centre interviews
CN104954429A (en) Method of an automatic help-seeking system for danger situations
US12190559B2 (en) System and method for prioritizing similarities and differences during responses to incidents based on incident context
US10904297B1 (en) Controlled-environment facility resident and associated non-resident telephone number investigative linkage to e-commerce application program purchases
Liu et al. Value-inspired service design in elderly home-monitoring systems
Escher et al. Transparency for Bystanders in IoT regarding audiovisual Recordings
US10728383B1 (en) Controlled-environment facility resident pattern electronic communication detection and controlled-environment facility action
US10891813B2 (en) Cognitive vetting system for unsolicited residential visitors
De Backere et al. Social-aware event handling within the FallRisk project
KR102481444B1 (en) Social weak person management service providing system and method
US20250247468A1 (en) Automated alarm intake and classification
US20240378933A1 (en) Central Management System and Methods to Manage Facility Entrances
US20250191584A1 (en) Method, device and system of a voice responsive device based participative public engagement computing platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 06, 12/F, Yihua Financial Technology Building, No. 2388 Houhai Avenue, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: SHENZHEN WEIFU ROBOT TECHNOLOGY Co.,Ltd.

Address before: Room 06, 12/F, Yihua Financial Technology Building, No. 2388 Houhai Avenue, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN LINGYANG MICRO SERVER ROBOT TECHNOLOGY Co.,Ltd.

COR Change of bibliographic data
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220704

Address after: Room 418A, Building 3, Sangtai Science Park, Xinwei Community, Xili Street, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Hande Intelligent Technology Co., Ltd.

Address before: Room 06, 12/F, Yihua Financial Technology Building, No. 2388 Houhai Avenue, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: SHENZHEN WEIFU ROBOT TECHNOLOGY Co.,Ltd.