CN114124597A - Control method, equipment and system of Internet of things equipment
- Publication number: CN114124597A
- Application number: CN202111263599.4A
- Authority: CN (China)
- Prior art keywords: user, control, scene, target, internet
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
- G10L15/08—Speech classification or search
- G10L15/26—Speech to text systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/146—Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
- G10L2015/088—Word spotting
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
Embodiments of this application disclose a control method, device, and system for internet of things devices. The method includes: receiving a user's voice; acquiring user information of the user, the user information including a user position and/or a user identity; determining a target scene identifier in a scene identifier set according to the user information, where the scene identifier set includes space identifiers, obtained from the positions of the internet of things devices in each control scene, and user identifiers, obtained from the user corresponding to each control scene; when a control scene corresponding to the target scene identifier exists among the first control scenes obtained from the voice, determining the target control scene according to that control scene; and controlling the internet of things devices in the target control scene. This simplifies the voice commands a user must issue to control devices and improves the user experience.
Description
Technical Field
The application relates to the field of the internet of things, and in particular to a control method, device, and system for internet of things devices.
Background
With the continuous development of internet of things technology, internet of things devices such as smart home appliances are increasingly widespread. Smart home appliances are household appliances that incorporate microprocessors, sensor technology, network communication technology, and the like; common examples include lamps, air conditioners, refrigerators, and audio equipment.
To make controlling various internet of things devices more intelligent, a user can control the state of the devices by voice. Generally, multiple control scenes are set for the internet of things devices; when different control scenes are executed through the user's voice, the same device can be placed in different working states, or different devices can be controlled.
At present, to control devices in different scenes, the user's voice is generally recognized and matched against voice keywords to obtain a target control scene. The voice keywords typically include user actions as well as other keywords that can be used to determine a target control scene. However, this voice control method requires the user's voice to carry more information; otherwise the wrong device may be controlled. The user's voice commands therefore become more cumbersome, and the user experience suffers.
Disclosure of Invention
In view of this, the present application provides a control method, device, and system for internet of things devices, so as to simplify the voice commands a user issues to control devices and improve the user experience.
In a first aspect, the present application provides a method for controlling an internet of things device, where the method includes:
receiving the voice of a user;
acquiring user information of the user, wherein the user information comprises a user position and/or a user identity;
determining a target scene identifier in a scene identifier set according to the user information; the scene identification set comprises space identifications and user identifications, the space identifications are obtained according to the positions of the Internet of things equipment in each control scene, and the user identifications are obtained according to users corresponding to each control scene;
when a control scene corresponding to a target scene identifier exists in a first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identifier;
and controlling the Internet of things equipment in the target control scene.
Because the user information includes the user's position and/or identity, it reflects the user's requirements more accurately. The space identifiers in the scene identifier set are obtained from the positions of the internet of things devices in each control scene, and the user identifiers are obtained from the user corresponding to each control scene, so the identifiers in the set characterize each control scene more precisely. A target scene identifier that meets the user's requirements can therefore be obtained by matching the user information against the scene identifiers in the set. When a control scene corresponding to the target scene identifier exists among the first control scenes obtained from the voice, the target control scene is determined according to that control scene. The user's voice is thus not the only basis for determining the target control scene, and the voice need not include the parts related to the user information; the voice commands the user issues to control devices are simplified, and the user experience is improved.
In one possible embodiment, the method further comprises:
and when a control scene corresponding to the target scene identification does not exist in a first control scene obtained according to the voice, determining a target control scene according to the first control scene.
In a possible implementation manner, before acquiring the user information including the user position and/or the user identity, the method further includes:
judging whether the semantic meaning of the voice contains a preset scene keyword or not;
the acquiring user information including a user location and/or a user identity of the user includes:
and when the semantics do not contain the preset scene keywords, acquiring the user information.
In one possible embodiment, the method further comprises:
and when the semantics comprise the preset scene key words, determining a target control scene according to the semantics.
In a possible implementation manner, the determining a target control scenario according to the control scenario corresponding to the target scenario identifier includes:
determining a control scene corresponding to the target scene identifier as a second control scene;
when the second control scenario includes a plurality of control scenarios,
and determining a target control scene according to the executed historical information of each control scene in the second control scenes.
In one possible embodiment, the history information of the execution of each control scene in the second control scenes includes: the time at which each of the second control scenes was executed, or the number of times each of the second control scenes was executed within a preset time period.
In one possible embodiment, the method further comprises:
determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment;
determining the position distribution of the controlled equipment in each control scene in a preset area;
and determining the space identification corresponding to each control scene according to the position distribution.
In one possible embodiment, the method further comprises:
determining the identity of a target user in each control scene;
judging whether a target user in each control scene has a role identifier of the user created by the target user, wherein the role identifier of the user is used for representing the role of the user in a preset user set;
if so, determining user identifications corresponding to the control scenes respectively according to the identity identifications of the target users in the control scenes and the role identifications of the users in the control scenes.
In a second aspect, the present application provides an internet of things gateway device, where the internet of things gateway device is configured to execute any one of the above control methods for an internet of things device, so as to control the internet of things device.
In a third aspect, the present application provides an internet of things system, which includes the above-mentioned internet of things gateway device, and further includes one or more internet of things devices.
Drawings
Fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application;
fig. 2 is a flowchart of a control method for an internet of things device according to an embodiment of the present application;
fig. 3 is a flowchart of a control method for an internet of things device according to another embodiment of the present application.
Detailed Description
In order to facilitate understanding of the technical solutions provided in the embodiments of the present application, a control method, a device, and a system for an internet of things device provided in the embodiments of the present application are described below with reference to the accompanying drawings.
While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Other embodiments, which can be derived by those skilled in the art from the embodiments given herein without any inventive contribution, are also within the scope of the present application.
In the claims and specification of the present application and in the drawings accompanying the description, the terms "comprise" and "have" and any variations thereof, are intended to cover non-exclusive inclusions.
At present, to control devices in different scenes, the user's voice is generally recognized and matched against voice keywords to obtain a target control scene. The voice keywords typically include user actions as well as other keywords that can be used to determine a target control scene. However, this voice control method requires the user's voice to carry more information; otherwise the wrong device may be controlled. The user's voice commands therefore become more cumbersome, and the user experience suffers.
Based on this, in the embodiments of the present application, the target scene identifier is obtained from the user information, which includes the user position and/or the user identity and therefore provides more information related to the user's control requirements. Even when the user's voice provides little information, the target control scene, obtained by combining the user's voice with the various sensed data that make up the user information, can better meet the user's control requirements. The voice commands the user issues to control devices are thus simplified, and the user experience is improved.
To make control of internet of things devices more intelligent, a user can control the state of the devices by voice. The internet of things devices here are typically one or more household devices. Generally, a plurality of control scenes are set for the one or more household devices; when different control scenes are executed through the user's voice, the same household device can be placed in different working states, or different devices can be controlled.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, where an internet of things device may include a home device.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The gateway device executes the control method of the embodiments to control the one or more internet of things devices.
In fig. 1, the internet of things system 100 is shown, as an example, with an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical applications, the system may include one internet of things device, two, or more than three.
In some possible cases, the internet of things device may be a household device, such as an air conditioner, a lamp, a fan, a sound box, and the like.
Referring to fig. 2, fig. 2 is a flowchart of a method for controlling an internet of things device according to an embodiment of the present application.
As shown in fig. 2, the method for controlling internet of things devices in this embodiment includes S201 to S205.
S201, receiving voice of a user.
In S201, the user is the person who controls the devices through control scenes; the control is performed by voice.
S202, obtaining user information including the user position and/or the user identity of the user.
In S202, the user position refers to where the user is located when the voice is received in S201, and the user identity identifies who the user is.
The user information may include the user position and/or the user identity; in other words, it may contain only the user position, only the user identity, or both.
It will be appreciated that the user information may also include other information about the user. Both the user information and the voice received in S201 belong to the same user.
S203, determining a target scene identifier in a scene identifier set according to the user information; the scene identification in the scene identification set comprises a space identification and a user identification, the space identification is obtained according to the position of the internet of things equipment in each control scene, and the user identification is obtained according to the user corresponding to each control scene.
In S203, the scene identifier set is a set of scene identifiers. The set contains a plurality of scene identifiers, which fall into two types: space identifiers and user identifiers. The scene identifiers in the set correspond to the control scenes.
The target scene identifier is determined in the scene identifier set according to the user information. Specifically, the target scene identifier is one or more scene identifiers from the set, and the basis for selecting it from the set is the user information.
And the space identifiers in the scene identifier set are obtained according to the positions of the Internet of things equipment in each control scene.
The internet of things equipment in each control scene refers to the internet of things equipment of which the operation state is controlled when each control scene is executed. For each control scenario, the internet of things devices in the control scenario may include one or more devices.
The position of the internet of things devices in each control scene refers to where those devices are located; for each control scene, it specifically means the spatial position of the controlled device or devices in that scene.
And the user identification in the scene identification set is obtained according to the user corresponding to each control scene. For each control scene, the user corresponding to the control scene is the user who completes control of the internet of things device by executing the control scene.
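To make the structure of S203 concrete, here is a minimal data-model sketch in Python; the class, field, and function names are illustrative assumptions rather than anything defined by this application.

```python
from dataclasses import dataclass

@dataclass
class ControlScene:
    name: str              # e.g. "parents sleep in the master bedroom"
    device_ids: list[str]  # internet of things devices this scene controls
    space_id: str          # derived from where the controlled devices are located
    user_id: str           # derived from the user the scene serves

def scene_identifier_set(scenes: list[ControlScene]) -> set[str]:
    """Collect both identifier types of every control scene (S203)."""
    ids: set[str] = set()
    for scene in scenes:
        ids.add(scene.space_id)
        ids.add(scene.user_id)
    return ids
```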
S204, when a control scene corresponding to the target scene identification exists in the first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identification.
In S204, a first control scenario is derived from the user' S voice. In some possible cases, the first control scenario may include one or more control scenarios.
In some possible cases, the above-described first control scenario may be obtained in the following manner.
The user's voice is recognized to obtain its semantics; all control scenes and their corresponding voice keywords are acquired, and the semantics are matched against the preset voice keywords; the first control scenes are then obtained from the matched voice keywords, forming a list of scenes that conform to the semantics.
Specifically, the above speech processing can be implemented with natural language processing (NLP) techniques.
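As a rough sketch of this matching step (a real system would use NLP rather than plain substring tests, and the scene names and keyword lists below are made-up assumptions):

```python
def first_control_scenes(semantics: str,
                         scene_keywords: dict[str, list[str]]) -> list[str]:
    """Return every control scene whose preset voice keywords appear in
    the recognized semantics: the scene list conforming to the semantics."""
    return [scene for scene, keywords in scene_keywords.items()
            if any(kw in semantics for kw in keywords)]

# Example: the bare semantics "sleep" match every sleeping scene,
# which is exactly why extra information is needed to disambiguate.
candidates = first_control_scenes("sleep", {
    "parents sleep in the master bedroom": ["sleep", "master bedroom"],
    "child sleeps in the children's room": ["sleep", "children's room"],
    "movie night in the living room": ["movie"],
})
# candidates == ["parents sleep in the master bedroom",
#                "child sleeps in the children's room"]
```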
The process of obtaining the first control scenes from the user's voice is performed after the voice is received in S201. Specifically, it may be performed either before or after the user information is acquired.
In order to realize the control of the device by voice, the information contained in the voice of the user is usually related to the control requirement of the user on the device, or is partially related to the control requirement of the user on the device. Therefore, one or more control scenes that conform to the speech semantics can be obtained from the speech of the user, i.e., the first control scene is obtained from the speech of the user.
When the user's voice is simple and provides little information, the one or more control scenes obtained from it do not necessarily meet, or do not fully meet, the user's control requirements.
The target scene identification is obtained according to the user information of the user and contains information related to the control requirement of the user. In S204, when a control scene corresponding to the target scene identifier exists in the first control scene obtained according to the voice, a target control scene is determined according to the control scene corresponding to the target scene identifier. In other words, in the first control scene obtained from the speech, the control scene corresponding to the target scene identifier is selected, and thus the target control scene is determined.
Because the target scene identifier contains information related to the user's control requirements, the target control scene obtained in this way better matches those requirements.
In S204, the existence, among the first control scenes obtained from the voice, of a control scene corresponding to the target scene identifier is the condition for determining the target control scene from that control scene. The condition can be checked as an explicit judgment:
judging whether a control scene corresponding to the target scene identification exists in a first control scene obtained according to the voice;
and if so, determining a target control scene according to the control scene corresponding to the target scene identifier.
Since scene identifiers include space identifiers and user identifiers, the target scene identifier may be one or more scene identifiers. Accordingly, a control scene corresponding to the target scene identifier may exist among the first control scenes in at least the following cases:
Case one: among the first control scenes, a control scene corresponding to the space identifier in the target scene identifier exists, but no control scene corresponds to the user identifier in the target scene identifier;
Case two: among the first control scenes, a control scene corresponding to the user identifier in the target scene identifier exists, but no control scene corresponds to the space identifier in the target scene identifier;
Case three: among the first control scenes, a control scene corresponding to both the space identifier and the user identifier in the target scene identifier exists.
S205, controlling the Internet of things equipment in the target control scene.
In S205, the internet of things device in the target control scenario refers to the internet of things device whose operation state is controlled when the target control scenario is executed.
Based on S201-S205, since the target scene identifier is obtained from the user information, which includes the user position and/or the user identity, more information related to the user's control requirements is available. Even when the user's voice provides little information, the target control scene obtained by combining the voice with the user information better meets those requirements; the voice commands used to control devices are simplified, and the user experience is improved.
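Putting S201-S205 together, the following minimal sketch shows one way the scene-selection logic could look; it also folds in the fallback discussed just below. The function name and data shapes are illustrative assumptions, not anything specified by this application.

```python
def determine_target_scenes(first_scenes: list[str],
                            target_ids: set[str],
                            ids_of_scene: dict[str, set[str]]) -> list[str]:
    """S204: keep the voice-derived candidates whose identifiers overlap
    the target scene identifiers derived from the user information; if
    none overlap, fall back to the voice-derived candidates alone."""
    matched = [s for s in first_scenes if ids_of_scene[s] & target_ids]
    return matched if matched else first_scenes
```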
In one possible implementation manner, when there is no control scenario corresponding to the target scenario identifier in the first control scenario, the target control scenario may be determined by the following implementation manner:
when, in the first control scenario, there is no control scenario corresponding to the target scenario identification,
determining a target control scene according to the first control scene;
and controlling the Internet of things equipment in the target control scene.
When no control scene in the first control scenes corresponds to the target scene identifier, there is some difference between the control scenes obtained from the user's voice and those obtained by matching the user information against the scene identifiers.
Considering that the voice uttered by the user when controlling the device generally represents the current control requirement of the user, the target control scene is determined according to the first control scene obtained by the voice of the user, so as to complete the control of the device of the internet of things.
In some possible cases, the first control scenario includes a plurality of control scenarios. In order to improve the accuracy of the control of the device, a unique target control scenario needs to be determined. At this time, according to the first control scenario, the following implementation may be adopted to determine the target control scenario:
and determining a target control scene according to the history information of the executed control scenes in the first control scene.
Here, the history information of execution of each control scenario in the first control scenario refers to a case where a past control scenario is executed for each control scenario in the first control scenario.
In some possible cases, the history information of the execution of each control scenario in the first control scenario may include:
the time at which each control scenario is executed in the first control scenario. Further, in particular the time of the last execution. For example, for one of the first control scenarios, the time of the last execution is closer to the current time, which indicates that the control scenario may be more relevant at the current time. Thus, the control scenario may be determined as the target control scenario.
In some possible cases, the history information of the executed control scenarios in the first control scenario may further include:
the number of times each of the first control scenarios is executed within a preset time period. For example, for one of the first control scenarios, the control scenario is executed more frequently within the preset time period, which indicates that the control scenario is executed more frequently within the preset time period, and the control scenario is executed more likely. Thus, the control scenario may be determined as the target control scenario.
In a possible implementation manner, determining a target control scenario according to the control scenario corresponding to the target scenario identifier may include the following implementation manners:
determining a control scene corresponding to the target scene identifier as a second control scene;
when the second control scenario includes a plurality of control scenarios,
and determining a target control scene according to the executed historical information of each control scene in the second control scenes.
Since the target scene identifier may be one or more scene identifiers, when a control scene corresponding to the target scene identifier exists in the first control scene, it is determined that the control scene corresponding to the target scene identifier is the second control scene. At this time, the obtained second control scenario may also include a plurality of control scenarios.
In order to improve the accuracy of the device control, the obtained target control scenario is unique, and therefore, the unique target control scenario needs to be determined according to the second control scenario.
Here, the history information of execution of each control scenario in the second control scenario refers to a case where a past control scenario is executed for each control scenario in the second control scenario.
In some possible cases, the history information of the execution of each control scenario in the second control scenario may include: the time at which each control scenario is executed in the second control scenario. Further, in particular the time of the last execution.
In some possible cases, the history information of executed control scenarios in the second control scenario may further include: the number of times each of the second control scenarios is executed within a preset time period.
The meaning and role of the execution history of each control scene in the second control scenes are similar to those described above for the first control scenes, and are not repeated here.
In some possible cases, when the second control scenario includes a plurality of control scenarios, the target control scenario may also be determined in other ways.
In S202, user information including a user location and/or a user identity of the user is obtained.
The user information is used to determine the target scene identifier and, in turn, the target control scene, thereby completing control of the internet of things devices. Acquiring more accurate user information therefore improves the accuracy of device control.
Examples of specific ways to acquire the user information are given below. They are merely examples of implementations of the embodiments of the present application and do not limit them.
In S202 of the control method of the embodiments, the user position may be obtained in the following ways, described here as applied to controlling household devices in a home.
First way:
The user's voice is detected by the voice devices in the user's home; each voice device estimates the sound distance of the voice; the room containing the device closest to the user is determined from the sound distances obtained by the voice devices; and the user position is determined from that room.
A voice device is a device in the internet of things system capable of receiving the user's voice. The sound distance is the distance, estimated by each voice device from the user's voice, to the position where the voice was uttered, which is generally where the user is.
For example, suppose an internet of things system contains a first, a second, and a third voice device, and all three receive the user's voice at the same time with a first, second, and third energy respectively. The three energies are compared; when the first energy is the largest, the first voice device is determined to be the one closest to the user, and the user position is determined from the position of the first voice device.
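A sketch of this energy comparison; the device names and room labels are made up for illustration:

```python
def locate_user(voice_energy: dict[str, float],
                device_room: dict[str, str]) -> str:
    """Treat the voice device that received the utterance with the
    highest energy as the closest one and report its room."""
    closest = max(voice_energy, key=voice_energy.get)
    return device_room[closest]

room = locate_user({"speaker-1": 0.82, "speaker-2": 0.35, "speaker-3": 0.10},
                   {"speaker-1": "master bedroom",
                    "speaker-2": "living room",
                    "speaker-3": "kitchen"})
# room == "master bedroom"
```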
Second way:
To control the devices in the internet of things system, a voice assistant can be installed on the user's smart devices. A smart device can receive the user's voice and, according to it, hold a dialogue with the user, interact with the cloud, and so on. An internet of things system may include several such smart devices.
In the system, the first smart device is the one that actually receives the user's voice and interacts with the cloud.
In some possible cases, the user position may be determined directly from the position of the first smart device. To control a device by voice, the user generally inputs the voice through such a smart device and is close to it while doing so; the position of the smart device receiving the voice input can therefore be taken directly as the user position.
In some possible cases, several smart devices may receive the sound, and the user position is determined from the position of the smart device closest to the user. Since the system may contain multiple smart devices, using several of them further improves the accuracy of the determined position.
Third way:
The internet of things system may contain household devices able to detect the user's spatial position, for example sensing devices such as infrared-equipped air conditioners and lamps.
These sensing devices detect the user and thereby yield the user position.
Fourth way:
When determining the user position, the device that receives the user's voice, the device that responds interactively to the user, and the device closest to the user are not necessarily at the position where the user is currently speaking. The three ways above may therefore be combined, in any combination, to improve the accuracy of the determined position.
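One plausible way to combine the ways above, sketched here with hypothetical method names, is to try them in order of expected reliability and take the first position any of them yields:

```python
from typing import Callable, Optional

def fuse_user_position(methods: list[Callable[[], Optional[str]]]) -> Optional[str]:
    """Try each localization method in order of expected reliability
    and return the first position any of them produces."""
    for method in methods:
        position = method()
        if position is not None:
            return position
    return None

# e.g. fuse_user_position([locate_by_sensing, locate_by_voice_energy,
#                          locate_by_smart_device])  # hypothetical methods
```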
It should be understood that the foregoing is only an example of the implementation manner of obtaining the user location in S202 in this embodiment, and is not a limitation to this embodiment. Obtaining the user location may also be accomplished in other ways.
In S202 of the control method of the embodiments, the user identity may be obtained in the following way, again as applied to controlling household devices in a home.
The interactive terminal that conducts voice interaction with the user recognizes the collected biometric data of the user and thereby identifies the interacting user.
The biometric data include the user's voiceprint, iris, and so on, and are processed using the image and speech processing capabilities of the interactive terminal. Once the interacting user is identified, the user identity can be represented by a user ID or the like.
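One way such voiceprint matching could be sketched, assuming fixed-length voiceprint vectors and a made-up similarity threshold:

```python
from typing import Optional

def identify_user(voiceprint: list[float],
                  enrolled: dict[str, list[float]],
                  threshold: float = 0.8) -> Optional[str]:
    """Return the enrolled user ID with the most similar voiceprint,
    or None when no similarity clears the threshold."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_id, best_sim = None, 0.0
    for user_id, enrolled_print in enrolled.items():
        sim = cosine(voiceprint, enrolled_print)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None
```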
It should be understood that the foregoing is only an example of the implementation manner of obtaining the user identity in S202 in this embodiment, and is not a limitation to this embodiment. Obtaining the user identity may also be accomplished in other ways.
In the embodiment corresponding to fig. 2, according to the user's voice and the acquired user information, which includes the user position and/or the user identity, the voice commands a user issues to control devices are simplified and the user experience is improved.
When a user controls a device by voice, the voice uttered by the user typically represents the user's current control needs. When the voice sent by the user contains more information, a control scene meeting the user requirement can be directly obtained according to the voice.
Another embodiment of the present application is given below. To improve the efficiency of device control, the user's voice is first analyzed; when it meets a preset condition, the target control scene is determined directly from the voice.
Referring to fig. 3, fig. 3 is a flowchart of a method for controlling an internet of things device according to another embodiment of the present application.
As shown in fig. 3, the method for controlling internet of things devices in this embodiment includes S301 to S307.
S301, recognizing the voice of the user to obtain the semantic meaning of the voice.
In S301, the user is a user who performs device control by voice.
S302, judging whether the semantics contain preset scene keywords.
In S302, the scene keyword is preset.
In some possible cases, whether the semantics include a preset scene keyword may be determined by the following implementation manner:
A plurality of scene keywords are preset; after the semantics of the voice are obtained, each preset keyword is compared against the semantics exhaustively to judge whether any of them is contained.
For example, the preset scene keyword may be used to describe related information of a spatial position corresponding to the control scene, and may also be used to describe related information of a user corresponding to the control scene.
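A minimal sketch of the exhaustive comparison; the keyword list is an illustrative assumption mixing both kinds of keywords:

```python
SCENE_KEYWORDS = {"master bedroom", "secondary bedroom", "children's room",
                  "parents", "child", "guest"}

def contains_scene_keyword(semantics: str) -> bool:
    """S302: exhaustively compare the recognized semantics against
    every preset scene keyword."""
    return any(keyword in semantics for keyword in SCENE_KEYWORDS)

# contains_scene_keyword("sleep in the master bedroom") -> True
# contains_scene_keyword("sleep")                       -> False
```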
If the semantics do not contain a preset scene keyword, S303 to S305 are executed; if they do, S306 is executed. Either branch is followed by S307.
S303, obtaining the user information of the user, wherein the user information comprises the user position and/or the user identity.
When the semantics do not contain the preset scene keywords, the user information of the user is acquired, including the user position and/or the user identity.
S304, determining a target scene identifier in a scene identifier set according to the user information; the scene identification in the scene identification set comprises a space identification and a user identification, the space identification is obtained according to the position of the internet of things equipment in each control scene, and the user identification is obtained according to the user corresponding to each control scene.
S305, when a control scene corresponding to the target scene identification exists in a first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identification.
S306, determining a target control scene according to the semantics.
And when the semantics comprise preset scene keywords, determining a target control scene according to the semantics.
In some possible cases, the preset scene keyword may be used to describe related information of a spatial position corresponding to the control scene.
For example, in controlling home devices, the control scenes include a "parents sleep in the master bedroom" scene, a "child sleeps in the children's room" scene, and a "guest sleeps in the secondary bedroom" scene. In these three control scenes, different devices are controlled respectively.
The preset scene keywords can be set to describe the positions of the internet of things devices in the control scene.
The preset scene keywords are specifically "master bedroom", "secondary bedroom", and "children's room".
The recognized semantics of the user's voice are "sleep in the master bedroom".
When controlling a device by voice, the semantics are typically recognized to obtain action keywords that identify a user action; here, the action keyword "sleep" is obtained.
It is detected that the semantics contain the preset scene keyword "master bedroom".
The determination result obtained in S302 is: the semantics comprise preset scene keywords.
At this time, the target control scene "parents sleep in the master bedroom" is obtained in S306 directly from the "master bedroom" and "sleep" keywords obtained above, and the internet of things devices in the target control scene are controlled.
Because the user's voice contains not only the user action information "sleep" but also the spatial position information "master bedroom" corresponding to the control scene, determining the target control scene directly from the voice usually controls the devices accurately.
In some possible cases, the preset scene keyword may also be used to describe related information of a user corresponding to the control scene.
For example, in controlling home devices, the control scenes include a "parents sleep in the master bedroom" scene, a "child sleeps in the children's room" scene, and a "guest sleeps in the secondary bedroom" scene. In these three control scenes, different devices are controlled respectively.
The preset scene keywords may be set to name the user served by the internet of things devices when the control scene is executed.
The preset scene keywords are specifically "parents", "child", and "guest".
The recognized semantics of the user's voice are "parents sleep".
When controlling a device by voice, the semantics are typically recognized to obtain action keywords that identify a user action; here, the action keyword "sleep" is obtained.
It is detected that the semantics contain the preset scene keyword "parents".
The determination result obtained in S302 is: the semantics comprise preset scene keywords.
At this time, the target control scene "parents sleep in the master bedroom" is obtained in S306 directly from the "parents" and "sleep" keywords obtained above, and the internet of things devices in the target control scene are controlled.
Because the user's voice contains not only the user action information "sleep" but also "parents", which indicates the user corresponding to the control scene, determining the target control scene directly from the voice usually controls the devices accurately.
And S307, controlling the Internet of things equipment in the target control scene.
In a possible case, the space identifiers in the scene identifier set may be obtained in the following manner, specifically including S401-S403.
S401, determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment.
In S401, a control scene is a scene for controlling internet of things devices. When different control scenes are executed through the user's voice, the same device can be placed in different working states, or different devices can be controlled.
The controlled device in the control scenario refers to the internet of things device whose working state is controlled when the control scenario is executed. The controlled device may include one or more internet of things devices.
To meet the user's different control requirements, one or more control scenes are set; what S401 determines is the controlled devices, and the controlled devices belong to their respective control scenes.
S402, determining the position distribution of the controlled equipment in each control scene in a preset area.
In S402, the location distribution refers to a distribution of locations of the controlled device in space.
In some possible cases, the preset area may be an area corresponding to a home of a family, and may specifically include an area corresponding to one or more rooms. One or more household devices are distributed in rooms in the house.
The controlled devices in each control scene are respectively distributed in the preset area, so that each control scene corresponds to the position distribution. S402 determines the position distribution corresponding to each control scene.
And S403, determining the space identifiers corresponding to the control scenes according to the position distribution.
In S403, it is determined that the control scenes respectively correspond to space identifiers, that is, each control scene corresponds to its own space identifier, and the space identifiers are used for identifying the control scenes.
For each control scene, the spatial identification has location information of the devices in the control scene, since the spatial identification is derived from the location distribution.
Based on S401-S403, the space identifier of each control scene is obtained from the position distribution, within the preset area, of the controlled devices in that scene.
For each control scene, the space identifier and the control scene have a corresponding relationship, and the space identifier contains position information of the controlled device in the control scene.
When the internet of things devices are controlled by voice and the user information includes the user position, the target space identifier is determined from the user information; the controlled devices in the control scene corresponding to the target space identifier are then obtained through the correspondence between space identifiers and control scenes, and those devices are controlled.
Because the space identification contains the position information of the equipment, the voice of the user does not need to contain the position information of the equipment, and therefore the voice of the user when controlling the equipment is simplified, and the use experience of the user is improved.
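A sketch of S401-S403, deriving a space identifier from the rooms of a scene's controlled devices; the room labels and the joining convention are assumptions:

```python
def space_identifier(controlled_devices: list[str],
                     device_room: dict[str, str]) -> str:
    """Combine the rooms containing a scene's controlled devices
    into that scene's space identifier."""
    rooms = sorted({device_room[dev] for dev in controlled_devices})
    return "+".join(rooms)

# A scene controlling a lamp and an air conditioner, both in the master
# bedroom, gets the identifier "master bedroom".
sid = space_identifier(["lamp-1", "ac-1"],
                       {"lamp-1": "master bedroom", "ac-1": "master bedroom"})
# sid == "master bedroom"
```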
In a possible case, the user identifiers in the scene identifier set may be obtained in the following manner, specifically including S501-S503.
S501, the identity of the target user in each control scene is determined.
In S501, the identity identifier belongs to the target user, the user served when the control scene is executed. Executing the control scene completes that user's voice control of the devices.
In some possible cases, the identity identifier of the target user in each control scene may be determined as follows: for each control scene, the biometric data of the target user are collected by a device with a biometric collection function, and the biometric information serves as the target user's identity identifier; alternatively, the biometric information is mapped to a user name, and the user name serves as the identity identifier. For example, since the device that receives the user's voice is a smart device, the collection of the target user's biometric data can be completed through that device. The biometric information may include the target user's image, iris, voiceprint, and so on. It is to be understood that this is only one implementation of the embodiments of the present application and does not limit them; other implementations are possible.
S502, judging whether the target user in each control scene has the role identification of the user created by the target user, wherein the role identification of the user is used for representing the role of the user in a preset user set.
In S502, the role identifier of the user is created by the target user, and is used to represent the role of the user in the preset user set.
It is understood that the role identifier of the user may be created by the target user and used for representing the own role of the target user, or may be created by the target user and used for representing the roles of other users.
The preset user set refers to a set including at least a target user, the set including one or more users, that is, the target user is one of the user set. The role of the target user in the preset user set refers to the role of the target user in the one or more users; the roles of the other users have similar meanings.
For example, the target user is an adult man. To distinguish different users, each user is assigned an identity identifier; assume the adult man's identity identifier is "Zhang San".
The preset user set consists of the members of the adult man's family. His role in the family is father, and the roles of the family members include father, mother, and child.
The adult man may create a role identification for the user.
In one possible scenario, the adult man may create his own character identification.
For example, the adult man corresponds to the role "father"; that is, "father" can serve as his role identifier. Here "father" is the role identifier of a user created by the target user, the user being the target user himself.
In this case, the identity identifier "Zhang San" and the role identifier "father" refer to the same person, the adult man.
Through the user's role identifier, the adult man's different control requirements under the identity "Zhang San" and under the role "father" can be distinguished. When he controls devices by voice, the device control corresponding to either "Zhang San" or "father" can be performed as needed.
In one possible scenario, the adult man creates a role identification for other users.
For example, the adult man creates the role identifier "child" for his own child. Here "child" is the role identifier of a user created by the target user, the user being the adult man's child.
Through the user's role identifier, the different control requirements under the identity "Zhang San" and under the role "child" can be distinguished. When the adult man controls devices by voice, the device control corresponding to either "Zhang San" or "child" can be performed as needed.
The foregoing is illustrative and explanatory of the embodiments of the present application by way of example and is not restrictive of the embodiments of the present application.
And S503, if yes, determining user identifications corresponding to the control scenes respectively according to the identity identifications of the target users in the control scenes and the role identifications of the users in the control scenes.
In S503, for each control scenario, when there is a role identifier of a user, the user identifier of the control scenario is determined according to the identity identifier of the target user and the role identifier of the user.
The result obtained in S503 is the user identification of each control scenario. The user identification and the control scenario are corresponding.
For a certain control scenario, when there is no role identification of the user, the following implementation may be adopted: and determining the user identification of the control scene according to the identification of the target user. At this time, the basis of the user identifier of the control scenario may not include the role identifier of the user.
Based on S501-S503, the identity identifier of the target user distinguishes different users, so that when devices are controlled by voice, it can be determined which user is performing the control.
By judging whether the role identification exists or not, the control requirement of the target user is distinguished from the control requirement of the role corresponding to the role identification, so that the control of the same user under different conditions can be accurately realized, and the accuracy of the control is improved.
Because the user identification contains the information of the user, the voice of the user does not need to contain the information of the user, and therefore the voice of the user when the user controls the equipment is simplified, and the use experience of the user is improved.
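A sketch of S501-S503, under the assumption that a user identifier simply pairs the identity identifier with the role identifier when one exists:

```python
from typing import Optional

def user_identifier(identity_id: str, role_id: Optional[str]) -> str:
    """S503: combine the target user's identity identifier with the
    user's role identifier when it exists; otherwise the identity
    identifier alone serves as the user identifier."""
    return f"{identity_id}/{role_id}" if role_id else identity_id

# user_identifier("Zhang San", "father") -> "Zhang San/father"
# user_identifier("Zhang San", None)     -> "Zhang San"
```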
Another embodiment of the present application is an internet of things system. As shown in fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, and an internet of things device may include a home device.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The internet of things gateway device is used for executing any one of the control methods of the internet of things device, and is used for controlling the internet of things device.
In fig. 1, the internet of things system 100 is shown, as an example, with an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical applications, the system may include one internet of things device, two, or more than three.
In some possible implementations, the scene identifier is obtained by a control method of any one of the internet of things devices described above.
The beneficial effects achievable by the internet of things system 100, the devices in the system, and the relationships among the devices are the same as those described above and are not repeated here.
Another embodiment of the present application is an internet of things gateway device. As shown in fig. 1, the internet of things gateway device is configured to execute the control method of the internet of things device, so as to control the internet of things device.
In an embodiment of the present application, a computer-readable storage medium is further provided, which stores a computer program for executing the above control method of the internet of things devices. It achieves the same technical effects, which are not repeated here to avoid redundancy. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202111263599.4A (CN114124597B) | 2021-10-28 | 2021-10-28 | Control method, equipment and system of Internet of things equipment |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202111263599.4A (CN114124597B) | 2021-10-28 | 2021-10-28 | Control method, equipment and system of Internet of things equipment |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN114124597A | 2022-03-01 |
| CN114124597B | 2023-06-16 |
Family

ID: 80377542

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202111263599.4A (CN114124597B, active) | Control method, equipment and system of Internet of things equipment | 2021-10-28 | 2021-10-28 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN114124597B (en) |
Patent Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN106356057A * | 2016-08-24 | 2017-01-25 | 安徽咪鼠科技有限公司 | Speech recognition system based on semantic understanding of computer application scenario |
| CN107832286A * | 2017-09-11 | 2018-03-23 | 远光软件股份有限公司 | Intelligent interactive method, equipment and storage medium |
| CN113409797A * | 2020-03-16 | 2021-09-17 | 阿里巴巴集团控股有限公司 | Voice processing method and system, and voice interaction device and method |
| CN111428512A * | 2020-03-27 | 2020-07-17 | 大众问问(北京)信息科技有限公司 | Semantic recognition method, device and equipment |
| CN111665737A * | 2020-07-21 | 2020-09-15 | 宁波奥克斯电气股份有限公司 | Intelligent household scene control method and system |
Also Published As

| Publication number | Publication date |
| --- | --- |
| CN114124597B (en) | 2023-06-16 |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |