CN109979463B - Processing method and electronic equipment - Google Patents

Info

Publication number: CN109979463B
Authority: CN (China)
Prior art keywords: user, mode, environment, information, input
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Application number: CN201910254428.1A
Other languages: Chinese (zh)
Other versions: CN109979463A (en)
Inventors: 吴鹏, 牛佩佩
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Priority and filing date: 2019-03-31
Publication of CN109979463A: 2019-07-05
Application granted; publication of CN109979463B: 2022-04-22

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application provide a processing method and an electronic device. The processing method comprises: in a first mode, obtaining environment information in a first manner; processing at least the environment information; and switching to a second mode if the processing result indicates that a user satisfying a condition is present in the environment. In the first mode, user input can be neither obtained nor responded to in a second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different. The method provided by the embodiments of the application improves the user's voice-interaction experience.

Description

Processing method and electronic equipment
Technical Field
Embodiments of the present application relate to the field of intelligent devices, and in particular to a processing method and an electronic device.
Background
Currently, many electronic devices carry voice assistants, such as Siri or Cortana. A voice assistant can listen to the user's audio information and execute corresponding instructions, but it must first be woken before use. In the prior art, a keyword often has to be spoken to wake the assistant, for example "Hey Siri" or "Hey Cortana"; only after the voice assistant captures and recognizes the preset keyword is it woken, i.e. it enters a state of receiving voice commands, after which it receives the user's voice instruction and performs the subsequent operations.
There is also a prior art technique that performs the wake-up by means of virtual or physical buttons or the like. For example, the Siri voice assistant on a smartphone can be woken by recognizing the user's fingerprint through a fingerprint-recognition button. However, whichever of the above wake-up approaches the user adopts, the operation is cumbersome, does not match human communication habits, is not convenient enough, and gives a poor user experience.
Disclosure of Invention
The embodiments of the present application provide the following technical solutions:
A first aspect of the present application provides a processing method, comprising:
in a first mode, obtaining environment information in a first manner;
processing at least the environment information;
switching to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in a second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
Preferably, the first manner is to obtain the environment information by obtaining an environment image;
and the second manner is to obtain the input by obtaining audio information in the environment.
Preferably, the processing result indicating that a user satisfying the condition is present in the environment comprises at least one of:
the environment information contains user face information, and the user face information indicates that the user's line of sight satisfies the condition; and/or
the environment information contains user face information, and the user face information indicates that a biometric feature of the face information satisfies the condition.
Preferably, obtaining the environment information in the first manner and processing at least the environment information comprises:
obtaining an original environment image;
processing the original environment image to obtain an edge image of the original environment image;
and processing the edge image to determine user face information.
Preferably, the user face information indicating that the user's line of sight satisfies a condition comprises: the attention position corresponding to the user's line of sight and the position of the acquisition device that obtains the environment information in the first manner satisfy a matching condition.
Preferably, being unable to obtain and respond to user input in the second manner comprises:
being unable to obtain the user's input; and/or
being unable to respond to the obtained user input;
and switching to the second mode comprises:
starting a voice acquisition device; and/or
waking up an application that responds to user input.
Preferably, the method further comprises, in the second mode, determining whether the person providing input in the second manner matches the user who satisfies the condition, and, if they match, responding to that person's input.
A second aspect of the present application provides an electronic device, comprising:
a first obtaining device configured to obtain the environment information in a first manner;
a second obtaining device configured to obtain the user's input in a second manner;
and a processing device configured to instruct the first obtaining device to obtain the environment information in the first manner, to process at least the environment information, and to switch to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in the second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
Preferably, the first obtaining device is an image obtaining device;
and the second obtaining device is an audio obtaining device.
Preferably, the first obtaining device is disposed at an end of the electronic device with its acquisition direction vertical, and is further configured to obtain an original environment image and to process the original environment image to obtain an edge image of the original environment image.
Preferably, the processing device is further configured to judge whether the environment information contains user face information indicating that the user's line of sight satisfies a condition; and/or
to judge whether the environment information contains user face information indicating that a biometric feature of the face information satisfies a condition.
Preferably, the processing device is further configured to judge whether the attention position corresponding to the user's line of sight and the position of the acquisition device that obtains the environment information in the first manner satisfy a matching condition.
Preferably, the processing device is further configured such that, in the first mode, the user's input cannot be obtained and/or the obtained user input cannot be responded to.
Preferably, the processing device is further configured to determine, in the second mode, whether the person providing input in the second manner matches the user who satisfies the condition, and, if they match, to respond to that person's input.
In the embodiments provided by the application, whether a user satisfying the conditions is present in the environment can be judged in order to wake the voice assistant of the electronic device. This is more convenient than existing methods, adds no unnecessary voice-keyword input or button operations, and thus better improves the user experience.
Drawings
Fig. 1 is a logic block diagram of a processing method provided in an embodiment of the present application;
fig. 2 is an original environment image and an edge image of the original environment image in an embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device in an embodiment of the present application.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, those skilled in the art will be able to realize many other equivalent embodiments having the characteristics set forth in the claims, all of which therefore fall within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, a first embodiment of the present application provides a processing method, comprising:
in a first mode, obtaining environment information in a first manner;
processing at least the environment information;
switching to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in a second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
The processing method can be applied to an electronic device to wake the intelligent assistant of the electronic device. The electronic device may have two operating modes, a first mode and a second mode: the first mode may be a state in which the intelligent assistant has not yet been woken, and the second mode a state in which the assistant has been woken and is waiting to receive the user's voice instruction; after the user's voice instruction is received, the operation corresponding to the instruction is performed.
In one embodiment, while the intelligent assistant of the electronic device has not been woken, the environment information may be obtained in the first manner and processed; if the processing result indicates that a user satisfying the condition is present in the environment, the intelligent assistant may be woken and wait to receive the user's voice instruction.
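To make the two-mode flow concrete, the following is a minimal Python sketch of the state switch. It is an illustration only; the camera, detector, and assistant interfaces are hypothetical placeholders, not anything specified by the patent.
```python
from enum import Enum, auto

class Mode(Enum):
    FIRST = auto()   # assistant not yet woken: only image capture is active
    SECOND = auto()  # assistant woken: audio input is obtained and responded to

class ProcessingLoop:
    def __init__(self, camera, detector, assistant):
        self.camera = camera        # first obtaining device (images)
        self.detector = detector    # processing device (face/gaze checks)
        self.assistant = assistant  # voice assistant (second manner)
        self.mode = Mode.FIRST

    def tick(self):
        if self.mode is Mode.FIRST:
            frame = self.camera.capture()             # first manner: environment image
            if self.detector.qualifying_user(frame):  # user satisfying the condition?
                self.assistant.start_microphone()     # start the voice acquisition device
                self.mode = Mode.SECOND               # switch to the second mode
        else:
            audio = self.assistant.listen()           # second manner: audio input
            if audio is not None:
                self.assistant.respond(audio)
```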
In the embodiments provided in the present application, the environment information may, for example, be image information acquired in the first manner, specifically contactless image information such as a picture of the surrounding environment.
The method provided by the application can effectively wake the voice assistant of the electronic device, that is, put it into a state of receiving voice commands. By judging whether a user satisfying the conditions is present in the environment, the method wakes the assistant more conveniently than existing methods and adds no unnecessary voice-keyword input or button operations.
In one embodiment provided by the present application, the first manner is to obtain the environment information by obtaining an environment image;
and the second manner is to obtain the input by obtaining audio information in the environment.
The first manner and the second manner provided in the embodiments of the application are different: the first manner obtains the environment information by obtaining an environment image, while the second manner obtains the input by obtaining audio information in the environment.
In one embodiment, while the intelligent assistant of the electronic device has not been woken (i.e., in the first mode), the environment information is obtained by obtaining an environment image. In this embodiment the environment information can only be obtained from the environment image and not from audio information in the environment; that is, in the first mode, information can be obtained only in the first manner, not in the second manner.
In another embodiment, the processing result indicating that a user satisfying a condition is present in the environment comprises at least one of:
the environment information contains user face information, and the user face information indicates that the user's line of sight satisfies the condition; and/or
the environment information contains user face information, and the user face information indicates that a biometric feature of the face information satisfies the condition.
In this embodiment, if the environment information contains user face information and the user face information indicates that the user's line of sight satisfies a condition, the processing result indicates that a user satisfying the condition is present in the environment. The application does not specifically limit what condition the user's line of sight must satisfy; it can be set as required. For example, whether the attention position corresponding to the user's line of sight and the position of the acquisition device that obtains the environment information in the first manner satisfy the matching condition may serve as the judgment of whether the line of sight satisfies the condition: specifically, whether the distance between the attention position and the position of the acquisition device is smaller than a threshold. The threshold may be set according to the user's usage scenario: in a relatively open space, for example a hotel lobby, the threshold may be larger, for example 10 m; in a narrower space, for example at home, it may be lowered accordingly, for example to 3 m; and when the space becomes smaller still, for example in a car, it may be reduced further, for example to 0.3-0.5 m. By judging whether the user's line of sight satisfies the condition, the method judges whether a user satisfying the conditions is present in the environment and effectively imitates people's natural habit of looking at the person they are speaking to; waking the voice assistant in this way matches human communication habits, is convenient to use, and better improves the user experience.
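A minimal sketch of this matching condition, assuming the attention position of the user's line of sight and the acquisition device's position are already expressed in a common coordinate frame; the function name and the threshold table are illustrative, with values taken from the examples above.
```python
import math

# Illustrative thresholds per scenario; the values follow the examples
# in the text above, but the table itself is an assumption.
GAZE_THRESHOLD_M = {
    "hotel_lobby": 10.0,
    "home": 3.0,
    "car": 0.4,  # within the 0.3-0.5 m range given above
}

def gaze_matches(attention_pos, camera_pos, scenario="home"):
    """Matching condition: the attention position of the user's line of
    sight lies within a scenario-dependent distance of the acquisition
    device that captured the environment image."""
    distance = math.dist(attention_pos, camera_pos)  # Euclidean distance
    return distance < GAZE_THRESHOLD_M[scenario]
```
For example, gaze_matches((1.0, 0.2, 2.5), (0.0, 0.0, 2.0), "home") returns True, since the separation is about 1.1 m, below the 3 m home threshold.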
In another embodiment of the present application, if the environment information contains user face information and the user face information indicates that a biometric feature of the face information satisfies a condition, the processing result indicates that a user satisfying the condition is present in the environment. In this embodiment, head photographs of at least one user may be uploaded in advance to the memory of the electronic device. When the environment information is processed and the processing result indicates that it contains user face information, the extracted face information is matched against the pre-uploaded photographs; if the face information of at least one user contained in the environment information matches a pre-uploaded head photograph, the processing result indicates that a user satisfying the condition is present in the environment.
In other embodiments of the present application, to improve the security of the electronic device, the biometric features of the user's face information must be recognized, and the intelligent assistant of the electronic device can be woken only when those features satisfy the condition; that is, only certain persons can wake the assistant. At the same time, to reduce the power consumption of the electronic device, the assistant is still not woken when the user's face information merely satisfies the preset biometric condition; it is woken only when the face information satisfies the preset biometric condition and the user's line of sight is also detected to satisfy the preset condition.
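Combining the two checks, a sketch of the low-power wake decision described above; it reuses gaze_matches from the previous sketch, and face.matches() stands in for an unspecified face recognizer.
```python
def should_wake(face, gaze_point, camera_pos, enrolled_faces, scenario="home"):
    """Wake only when the face matches an enrolled user (biometric
    condition) AND the line-of-sight condition also holds, per the
    low-power embodiment described above."""
    if not any(face.matches(enrolled) for enrolled in enrolled_faces):
        return False  # unknown person: the assistant stays asleep
    return gaze_matches(gaze_point, camera_pos, scenario)
```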
In one embodiment of the present application, obtaining the environment information in the first manner and processing at least the environment information comprises:
obtaining an original environment image;
processing the original environment image to obtain an edge image of the original environment image;
and processing the edge image to determine user face information.
In this embodiment, the acquired environment information is 360° image information centered on the electronic device. At present, the image that an image acquisition device on an electronic device can capture is usually square, rectangular, or circular, and is at best limited to wide-angle or ultra-wide-angle shots of the environment; a 360° environment image cannot be captured directly. The environment information obtained in the first manner in this embodiment is therefore not simply a captured image of the environment: an image of the environment (the original environment image) is obtained and then processed to obtain an edge image of the original environment image. As shown in fig. 2, taking a rectangular captured image as an example, a is the original environment image and b is the edge image of the original environment image.
In this embodiment, processing the environment information consists of processing the edge image to determine the user's face information.
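The patent does not name an edge-extraction operator; as one plausible sketch, a Canny detector (here via OpenCV) turns the original environment image into an edge image of the kind shown in fig. 2:
```python
import cv2

def edge_image(original_bgr):
    """Derive an edge image from the original environment image.
    Canny is an assumption; the patent leaves the operator unspecified."""
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
    return cv2.Canny(blurred, 50, 150)           # hysteresis thresholds
```
Face detection for the wake decision would then run on the returned edge image rather than on the raw frame.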
In other embodiments of the present application, being unable to obtain and respond to user input in the second manner comprises:
being unable to obtain the user's input; and/or
being unable to respond to the obtained user input;
and switching to the second mode comprises:
starting a voice acquisition device; and/or
waking up an application that responds to user input.
In the present embodiment, in the first mode the environment information can be obtained only in the first manner; if the user provides input to the electronic device in the second manner, the device cannot obtain that input and/or cannot respond to it.
In another embodiment, the intelligent assistant switching from the first mode to the second mode indicates that the assistant has been woken, i.e. that the electronic device has started the voice acquisition device and/or woken the application that responds to user input. The user can then issue voice instructions to the electronic device, which performs the corresponding operations. For example, if the user issues the voice command "play the song 'xxxxxx'", the device plays the corresponding song; if the user issues "play the movie 'xxxxxx'", the device plays the corresponding movie.
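A sketch of this post-wake handling; the assistant and player interfaces and the exact command phrases are hypothetical placeholders (str.removeprefix requires Python 3.9 or later):
```python
def handle_voice_command(assistant, player):
    """Second mode: obtain audio input in the second manner and respond.
    transcribe/listen/play_* are hypothetical interfaces."""
    text = assistant.transcribe(assistant.listen())
    if text.startswith("play the song"):
        player.play_song(text.removeprefix("play the song").strip(" '\""))
    elif text.startswith("play the movie"):
        player.play_movie(text.removeprefix("play the movie").strip(" '\""))
```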
In other embodiments of the present application, the method further comprises, in the second mode, determining whether the person providing input in the second manner matches the user who satisfies the condition, and, if they match, responding to that person's input. In this embodiment, it must be determined whether the user found in the environment by the processing result and the person providing input in the second mode are the same person; only when they are the same person is the input responded to. For example, in one application scenario, processing the environment information finds that the user satisfying the condition in the environment is user A, and the intelligent assistant of the electronic device switches from the first mode to the second mode. If, in this second mode, user B issues a voice instruction, it is determined whether user A and user B are the same person: if they are, the voice instruction issued by user B is responded to; if not, it is not responded to.
In another application scenario, processing the environment information finds several users satisfying the condition in the environment, namely user A, user B, and user C, and the intelligent assistant of the electronic device switches from the first mode to the second mode. In the second mode, a voice instruction issued by any one of users A, B, or C is responded to; if the user issuing the voice instruction is not among users A, B, and C in the second mode, for example user D or user E, no response is made.
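A sketch of this input-person check; speaker_id stands in for an unspecified speaker-recognition component, and qualifying_users would be the users (A, B, C in the scenario above) found during the first mode:
```python
def respond_if_matched(audio, qualifying_users, assistant, speaker_id):
    """Respond only when the person providing the second-manner input is
    among the users who satisfied the condition in the first mode."""
    speaker = speaker_id.identify(audio)  # hypothetical speaker recognition
    if speaker in qualifying_users:       # e.g. {"A", "B", "C"}
        assistant.respond(assistant.transcribe(audio))
    # Input from anyone else (user D or E in the example) is ignored.
```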
In the embodiments of the application, while the intelligent assistant of the electronic device has not been woken, an original environment image is obtained by obtaining an environment image, and the original environment image is processed to obtain its edge image; the edge image is processed to determine whether the environment information contains user face information. If the processing result indicates that it does, it is further judged whether the user's line of sight satisfies a preset condition and/or whether the biometric features of the user's face information satisfy a preset condition; if so, the intelligent assistant of the electronic device is woken to wait for the user's voice instruction, and after the voice instruction is received the corresponding operation is performed.
Based on the same inventive concept, as shown in fig. 3, a second embodiment of the present application provides an electronic device, comprising:
a first obtaining device configured to obtain the environment information in a first manner;
a second obtaining device configured to obtain the user's input in a second manner;
and a processing device configured to instruct the first obtaining device to obtain the environment information in the first manner, to process at least the environment information, and to switch to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in the second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
In this embodiment of the application, the first obtaining device and the second obtaining device are disposed on the electronic device to accept input in the different operating modes, so as to wake the intelligent assistant of the electronic device and to execute voice instructions after the wake-up. The electronic device may have two operating modes, a first mode and a second mode: the first mode may be a state in which the intelligent assistant has not yet been woken, and the second mode a state in which the assistant has been woken and is waiting to receive the user's voice instruction; after the voice instruction is received, the corresponding operation is performed.
In one embodiment, while the intelligent assistant of the electronic device has not been woken, the environment information may be obtained in the first manner and processed; if the processing result indicates that a user satisfying the condition is present in the environment, the intelligent assistant may be woken and wait to receive the user's voice instruction.
Through the cooperation of the first obtaining device, the second obtaining device, and the processing device, the electronic device provided by the application can effectively wake its voice assistant, i.e. put it into a state of receiving voice commands. The device effectively imitates people's natural habit of looking at the person they are speaking to; waking the voice assistant in this way matches human communication habits, is convenient to use, and better improves the user experience.
In another embodiment provided by the present application, the first obtaining device is an image obtaining device;
and the second obtaining device is an audio obtaining device.
The first obtaining device and the second obtaining device provided in the embodiments of the application are different: the first is an image obtaining device and the second an audio obtaining device. Correspondingly, the first manner and the second manner are different: the first manner obtains the environment information by obtaining an environment image, while the second manner obtains the input by obtaining audio information in the environment.
In one embodiment, while the intelligent assistant of the electronic device has not been woken (i.e., in the first mode), the environment information is obtained by the first obtaining device. In this embodiment the environment information can only be obtained by the first obtaining device and not by the second obtaining device; that is, in the first mode, information can be obtained only through the first obtaining device.
In other embodiments provided herein, the first obtaining device is disposed at an end of the electronic device with its acquisition direction vertical, and is further configured to obtain an original environment image and to process the original environment image to obtain an edge image of the original environment image.
In this embodiment, the first obtaining device may be disposed at the upper end or at the lower end of the electronic device. During image acquisition the electronic device stands vertically and the first obtaining device acquires images from the upper or lower end, i.e. the acquisition direction is vertical.
In this embodiment, the acquired environment information is 360° image information centered on the electronic device. At present, the image that an image acquisition device on an electronic device can capture is usually square, rectangular, or circular, and is at best limited to wide-angle or ultra-wide-angle shots of the environment; a 360° environment image cannot be captured directly. The environment information obtained in the first manner in this embodiment is therefore not simply a captured image of the environment: an image of the environment (the original environment image) is obtained and then processed to obtain an edge image of the original environment image. As shown in fig. 2, taking a rectangular captured image as an example, a is the original environment image and b is the edge image of the original environment image.
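The patent does not state how a vertically-oriented camera yields a 360° view; one plausible assumption is that the raw frame is an annular (fisheye-like) image that is unwrapped into a rectangular strip before edge extraction, as in this OpenCV sketch:
```python
import cv2

def unwrap_annular(frame):
    """Unwrap a vertically-captured annular frame into a rectangular
    strip covering the full 360 degrees around the device. Assumes the
    optical center sits at the image center (a real device would
    calibrate this). In the output, rows correspond to angle and
    columns to radius."""
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(w, h) / 2.0
    return cv2.warpPolar(frame, (256, 1440), center, max_radius,
                         cv2.WARP_POLAR_LINEAR)
```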
In this embodiment, processing the environment information consists of processing the edge image to determine the user's face information.
In one embodiment of the application, the processing device is further configured to judge whether the environment information contains user face information indicating that the user's line of sight satisfies a condition; and/or
to judge whether the environment information contains user face information indicating that a biometric feature of the face information satisfies a condition.
In this embodiment, if the environment information contains user face information and the user face information indicates that the user's line of sight satisfies a condition, the processing result indicates that a user satisfying the condition is present in the environment. The application does not specifically limit what condition the user's line of sight must satisfy; it can be set as required. For example, whether the attention position corresponding to the user's line of sight and the position of the acquisition device that obtains the environment information in the first manner satisfy the matching condition may serve as the judgment: specifically, whether the distance between the attention position and the position of the acquisition device is smaller than a threshold. The threshold may be set according to the user's usage scenario: in a relatively open space, for example a hotel lobby, the threshold may be larger, for example 10 m; in a narrower space, for example at home, it may be lowered accordingly, for example to 3 m; and when the space becomes smaller still, for example in a car, it may be reduced further, for example to 0.3-0.5 m.
In another embodiment of the present application, if the environment information contains user face information and the user face information indicates that a biometric feature of the face information satisfies a condition, the processing result indicates that a user satisfying the condition is present in the environment. In this embodiment, head photographs of at least one user may be uploaded in advance to the memory of the electronic device. When the environment information is processed and found to contain user face information, the extracted face information is matched against the pre-uploaded photographs; if the face information of at least one user contained in the environment information matches a pre-uploaded head photograph, the processing result indicates that a user satisfying the condition is present in the environment.
In other embodiments of the present application, to improve the security of the electronic device, the biometric features of the user's face information must be recognized, and the intelligent assistant can be woken only when those features satisfy the condition; that is, only certain persons can wake the assistant. At the same time, to reduce power consumption, the assistant is still not woken when the user's face information merely satisfies the preset biometric condition; it is woken only when the face information satisfies the preset biometric condition and the user's line of sight is also detected to satisfy the preset condition.
In other embodiments of the present application, the processing device is further configured such that, in the first mode, the user's input cannot be obtained and/or the obtained user input cannot be responded to.
In the present embodiment, in the first mode the environment information can be obtained only through the first obtaining device; if the user provides input to the electronic device through the second obtaining device, the device cannot obtain that input and/or cannot respond to it.
In another embodiment, the intelligent assistant switching from the first mode to the second mode indicates that the assistant has been woken, i.e. that the electronic device has started the voice acquisition device (that is, the second obtaining device) and/or woken the application that responds to user input. The user can then issue voice instructions to the electronic device, which performs the corresponding operations. For example, if the user issues the voice command "play the song 'xxxxxx'", the device plays the corresponding song; if the user issues "play the movie 'xxxxxx'", the device plays the corresponding movie.
In other embodiments of the present application, the processing device is further configured to determine, in the second mode, whether the person providing input in the second manner matches the user who satisfies the condition, and, if they match, to respond to that person's input.
In this embodiment, it must be determined whether the user found in the environment by the processing result and the person providing input in the second mode are the same person; only when they are the same person is the input responded to. For example, in one application scenario, processing the environment information finds that the user satisfying the condition in the environment is user A, and the intelligent assistant of the electronic device switches from the first mode to the second mode. If, in this second mode, user B issues a voice instruction, it is determined whether user A and user B are the same person: if they are, the voice instruction issued by user B is responded to; if not, it is not responded to.
In another application scenario, processing the environment information finds several users satisfying the condition in the environment, namely user A, user B, and user C, and the intelligent assistant of the electronic device switches from the first mode to the second mode. In the second mode, a voice instruction issued by any one of users A, B, or C is responded to; if the user issuing the voice instruction is not among users A, B, and C in the second mode, for example user D or user E, no response is made.
In the embodiments of the application, while the intelligent assistant of the electronic device has not been woken, an original environment image is obtained by obtaining an environment image, and the original environment image is processed to obtain its edge image; the edge image is processed to determine whether the environment information contains user face information. If the processing result indicates that it does, it is further judged whether the user's line of sight satisfies a preset condition and/or whether the biometric features of the user's face information satisfy a preset condition; if so, the intelligent assistant of the electronic device is woken to wait for the user's voice instruction, and after the voice instruction is received the corresponding operation is performed.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit it; the protection scope of the present application is defined by the claims. Those skilled in the art may make various modifications and equivalents within the spirit and scope of the present application, and such modifications and equivalents should also be considered to fall within its protection scope.

Claims (8)

1. A processing method, comprising:
in a first mode, obtaining environment information in a first manner;
processing at least the environment information, wherein the processing comprises:
obtaining an original environment image;
processing the original environment image to obtain an edge image of the original environment image;
processing the edge image to determine user face information;
and switching to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in a second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
2. The method of claim 1, wherein the first manner is to obtain the environment information by obtaining an environment image;
and the second manner is to obtain the input by obtaining audio information in the environment.
3. The method of claim 1, wherein the processing result indicating that a user satisfying the condition is present in the environment comprises at least one of:
the environment information contains user face information, and the user face information indicates that the user's line of sight satisfies the condition; and/or
the environment information contains user face information, and the user face information indicates that a biometric feature of the face information satisfies the condition.
4. The method of claim 3, wherein the user face information indicating that the user's line of sight satisfies a condition comprises: the attention position corresponding to the user's line of sight and the position of the acquisition device that obtains the environment information in the first manner satisfy a matching condition.
5. The method of claim 1, wherein being unable to obtain and respond to user input in the second manner comprises:
being unable to obtain the user's input; and/or
being unable to respond to the obtained user input;
and wherein switching to the second mode comprises:
starting a voice acquisition device; and/or
waking up an application that responds to user input.
6. The method of claim 1, further comprising, in the second mode, determining whether the person providing input in the second manner matches the user who satisfies the condition, and, if they match, responding to that person's input.
7. An electronic device, comprising:
a first obtaining device configured to obtain the environment information in a first manner, the first obtaining device being disposed at an end of the electronic device with its acquisition direction vertical and being further configured to obtain an original environment image and to process the original environment image to obtain an edge image of the original environment image;
a second obtaining device configured to obtain the user's input in a second manner;
and a processing device configured to instruct the first obtaining device to obtain the environment information in the first manner, to process at least the environment information, and to switch to a second mode if the processing result indicates that a user satisfying a condition is present in the environment;
wherein, in the first mode, user input can be neither obtained nor responded to in the second manner; in the second mode, user input can be obtained and responded to in the second manner, the first manner and the second manner being different.
8. The electronic device of claim 7, wherein the first obtaining device is an image obtaining device;
and the second obtaining device is an audio obtaining device.
CN201910254428.1A (priority date 2019-03-31, filing date 2019-03-31) Processing method and electronic equipment. Status: Active. Granted as CN109979463B (en).

Priority Applications (1)

Application Number: CN201910254428.1A; Priority Date: 2019-03-31; Filing Date: 2019-03-31; Title: Processing method and electronic equipment

Applications Claiming Priority (1)

Application Number: CN201910254428.1A; Priority Date: 2019-03-31; Filing Date: 2019-03-31; Title: Processing method and electronic equipment

Publications (2)

Publication Number: CN109979463A (en), published 2019-07-05
Publication Number: CN109979463B (en), published 2022-04-22

Family

ID=67081957

Family Applications (1)

Application Number: CN201910254428.1A (Active); Priority Date: 2019-03-31; Filing Date: 2019-03-31; Title: Processing method and electronic equipment

Country Status (1)

Country: CN; Publication: CN109979463B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4016524A4 (en) * 2019-08-15 2022-08-24 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. MEDICAL DEVICE AND MEDICAL DEVICE SYSTEM

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266648A (en) * 2007-03-13 2008-09-17 爱信精机株式会社 Facial feature point detection device, facial feature point detection method and program thereof
CN105204628A (en) * 2015-09-01 2015-12-30 涂悦 Voice control method based on visual awakening
CN106373568A (en) * 2016-08-30 2017-02-01 深圳市元征科技股份有限公司 Intelligent vehicle unit control method and device
CN106537490A (en) * 2014-05-21 2017-03-22 德国福维克控股公司 Electrically operated domestic appliance having a voice recognition device
CN108198553A (en) * 2018-01-23 2018-06-22 北京百度网讯科技有限公司 Voice interactive method, device, equipment and computer readable storage medium
CN108269572A (en) * 2018-03-07 2018-07-10 佛山市云米电器科技有限公司 A kind of voice control terminal and its control method with face identification functions

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102218906B1 (en) * 2014-01-17 2021-02-23 엘지전자 주식회사 Mobile terminal and controlling method thereof
CN107490971B (en) * 2016-06-09 2019-06-11 苹果公司 Intelligent automation assistant in home environment
CN107120791A (en) * 2017-04-27 2017-09-01 珠海格力电器股份有限公司 Air conditioner control method and device and air conditioner
CN109032554B (en) * 2018-06-29 2021-11-16 联想(北京)有限公司 Audio processing method and electronic equipment
CN108903521B (en) * 2018-07-03 2020-11-06 京东方科技集团股份有限公司 Man-machine interaction method applied to intelligent picture frame and intelligent picture frame
CN109067628B (en) * 2018-09-05 2021-07-20 广东美的厨房电器制造有限公司 Voice control method and control device of intelligent household appliance and intelligent household appliance

Also Published As

Publication Number: CN109979463A (en), published 2019-07-05

Similar Documents

Publication Publication Date Title
WO2021135685A1 (en) Identity authentication method and device
WO2021000814A1 (en) Voice control method and related apparatus
US9547760B2 (en) Method and system for authenticating user of a mobile device via hybrid biometics information
KR20190022109A (en) Method for activating voice recognition servive and electronic device for the same
CN108009414B (en) Multi-user intelligent console system based on biological recognition and control method
CN105204628A (en) Voice control method based on visual awakening
CN109712624A (en) A kind of more voice assistant coordination approach, device and system
CN108509037A (en) A kind of method for information display and mobile terminal
CN110968353A (en) Central processing unit awakening method and device, voice processor and user equipment
CN103292437A (en) Voice interactive air conditioner and control method thereof
CN102929660A (en) Method for controlling mood theme of terminal equipment and terminal equipment thereof
CN105976814A (en) Headset control method and device
KR102203720B1 (en) Method and apparatus for speech recognition
CN108549802A (en) An unlocking method, device and mobile terminal based on face recognition
CN112860169A (en) Interaction method and device, computer readable medium and electronic equipment
CN112634895A (en) Voice interaction wake-up-free method and device
WO2021082131A1 (en) Air conditioning device, and temperature control method and apparatus
CN108509782A (en) A kind of recognition of face control method and mobile terminal
CN114220420A (en) Multimodal voice wake-up method, device and computer-readable storage medium
CN109032345A (en) Apparatus control method, device, equipment, server-side and storage medium
CN109979463B (en) Processing method and electronic equipment
US20140300535A1 (en) Method and electronic device for improving performance of non-contact type recognition function
CN114125143A (en) A voice interaction method and electronic device
CN112133296B (en) Full duplex voice control method and device, storage medium and voice equipment
CN113932387A (en) Method, device and air conditioner for air conditioning control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant