CN107622770B - Voice wake-up method and device - Google Patents
- Publication number: CN107622770B (application CN201710922732.XA)
- Authority
- CN
- China
- Prior art keywords
- awakening
- voice
- scene
- threshold
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a voice wake-up method and device. When the similarity between a detected wake-up voice and a preset wake-up word signal, as recognized by a local first acoustic model, is neither high nor low, the method recognizes the voice again through a second acoustic model on a cloud server, so that false wake-ups and missed wake-ups of the terminal device are avoided as far as possible and the user experience is improved. In addition, when the similarity recognized by the first acoustic model is clearly high or clearly low, the terminal device decides locally whether to execute the wake-up operation, without sending the voice to the cloud server for recognition, which improves the efficiency of the wake-up operation.
Description
Technical Field
The invention relates to the technical field of intelligent human-computer interaction, and in particular to a voice wake-up method and device.
Background
Artificial Intelligence (AI) is a technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. As a branch of computer science, it attempts to understand the essence of intelligence and to produce intelligent machines that can react in ways similar to human intelligence; the field includes robotics, speech recognition, image recognition, natural language processing and expert systems.
With the development of speech recognition technology, more and more intelligent terminal devices are equipped with a voice wake-up function. The user speaks a segment of voice to the device; the device judges through a built-in algorithm whether the input voice contains the wake-up word, and if so, switches from the dormant state to the awake state.
However, the user may be in different scenes. For example, at a concert the scene is noisy and the proportion of noise in the speech received by the device is high, so the device may be woken up by mistake, which degrades the user experience.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to propose a voice wake-up method. When the similarity between the detected wake-up voice and the preset wake-up word signal, as recognized by the local first acoustic model, is neither high nor low, the method recognizes the voice again through a second acoustic model on a cloud server, so that false wake-ups and missed wake-ups of the terminal device are avoided as far as possible and the user experience is improved.
Therefore, a second objective of the present invention is to provide a voice wake-up apparatus.
A third object of the invention is to propose a computer device.
A fourth object of the invention is to propose a computer program product.
A fifth object of the invention is to propose a non-transitory computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present invention provides a voice wake-up method, including:
detecting a wake-up voice input to a terminal device and the current scene in which the terminal device is located;
acquiring a first threshold and a second threshold according to the current scene and a correspondence between scenes and thresholds, wherein the first threshold is greater than the second threshold;
analyzing acoustic features of the wake-up voice according to a first acoustic model to acquire a first similarity between the wake-up voice and a preset wake-up word signal;
judging whether the first similarity is greater than the second threshold and smaller than the first threshold;
if so, sending the wake-up voice to a cloud server so that the cloud server judges a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model and, if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device, wherein the recognition accuracy of the second acoustic model is greater than that of the first acoustic model;
and receiving the wake-up instruction and executing the operation of waking up the terminal device.
In the method as described above, generating a wake-up instruction for waking up the terminal device if the second similarity is greater than the first threshold includes:
analyzing the acoustic features of the wake-up voice according to the second acoustic model to acquire a pronunciation sequence corresponding to the wake-up voice;
analyzing the pronunciation sequence according to a language model to acquire a text sequence corresponding to the wake-up voice;
matching the text sequence corresponding to the wake-up voice with the text sequence corresponding to the preset wake-up word signal;
and if the matching is successful, generating the wake-up instruction for waking up the terminal device.
In the method, analyzing the acoustic features of the wake-up voice according to the first acoustic model to acquire the first similarity between the wake-up voice and the preset wake-up word signal includes:
determining feature similarities between the acoustic features of the wake-up voice and the acoustic features of the preset wake-up word signal according to the acoustic features of the wake-up voice and the first acoustic model;
and determining the first similarity between the wake-up voice and the preset wake-up word signal according to the feature similarities.
In the method, detecting the current scene in which the terminal device is located includes:
detecting the current position of the terminal device and determining the current scene of the terminal device according to the current position;
or detecting scene voice around the terminal device, performing corpus analysis on the scene voice to acquire a corpus set of the scene voice, determining the scene corresponding to the corpus set, and determining that scene as the current scene in which the terminal device is located.
The method as described above further includes:
if the first similarity is greater than the first threshold, executing the operation of waking up the terminal device;
or, if the first similarity is smaller than the second threshold, not executing the operation of waking up the terminal device.
To achieve the above object, an embodiment of a second aspect of the present invention provides a voice wake-up apparatus, including:
a first detection module, configured to detect a wake-up voice input to the terminal device;
a second detection module, configured to detect the current scene in which the terminal device is located;
a threshold module, configured to acquire a first threshold and a second threshold according to the current scene and a correspondence between scenes and thresholds, wherein the first threshold is greater than the second threshold;
an analysis module, configured to analyze acoustic features of the wake-up voice according to a first acoustic model to acquire a first similarity between the wake-up voice and a preset wake-up word signal;
a judging module, configured to judge whether the first similarity is greater than the second threshold and smaller than the first threshold, and if so, to trigger the sending module;
the sending module, configured to send the wake-up voice to a cloud server so that the cloud server judges a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model and, if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device, wherein the recognition accuracy of the second acoustic model is greater than that of the first acoustic model;
and a first execution module, configured to receive the wake-up instruction and execute the operation of waking up the terminal device.
In the foregoing apparatus, the cloud server includes a wake-up instruction generation module, which is specifically configured to:
analyze the acoustic features of the wake-up voice according to the second acoustic model to acquire the pronunciation sequence corresponding to the wake-up voice;
analyze the pronunciation sequence according to a language model to acquire the text sequence corresponding to the wake-up voice;
match the text sequence corresponding to the wake-up voice with the text sequence corresponding to the preset wake-up word signal;
and, if the matching is successful, generate the wake-up instruction for waking up the terminal device.
In the above apparatus, the analysis module is specifically configured to:
determine feature similarities between the acoustic features of the wake-up voice and the acoustic features of the preset wake-up word signal according to the acoustic features of the wake-up voice and the first acoustic model;
and determine the first similarity between the wake-up voice and the preset wake-up word signal according to the feature similarities.
In the above apparatus, the second detection module is specifically configured to:
detect the current position of the terminal device and determine the current scene of the terminal device according to the current position;
or, detect scene voice around the terminal device, perform corpus analysis on the scene voice to acquire a corpus set of the scene voice, determine the scene corresponding to the corpus set, and determine that scene as the current scene in which the terminal device is located.
The apparatus as described above further includes a second execution module and a third execution module;
if the judging module determines that the first similarity is greater than the first threshold, the second execution module is triggered; the second execution module is configured to execute the operation of waking up the terminal device;
or, if the judging module determines that the first similarity is smaller than the second threshold, the third execution module is triggered; the third execution module is configured not to execute the operation of waking up the terminal device.
To achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, including a memory and a processor, wherein the processor implements the voice wake-up method according to the first aspect by reading executable program code stored in the memory and running a program corresponding to that code.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a computer program product; when instructions in the computer program product are executed by a processor, the voice wake-up method according to the first aspect is performed.
To achieve the above object, an embodiment of a fifth aspect of the present invention provides a non-transitory computer-readable storage medium storing a computer program; when executed by a processor, the computer program implements the voice wake-up method according to the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart illustrating a voice wake-up method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a voice wake-up method according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a voice wake-up apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a voice wake-up apparatus according to another embodiment of the present invention;
FIG. 5 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer throughout to elements that are the same or similar or have the same or similar functions. The embodiments described below with reference to the drawings are illustrative; they are intended to explain the invention and are not to be construed as limiting it.
The following describes a voice wake-up method and apparatus according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart illustrating a voice wake-up method according to an embodiment of the present invention. The method is performed by a voice wake-up apparatus, which may be implemented in hardware and/or software and may be integrated into a terminal device.
As shown in fig. 1, the voice wake-up method proposed in this embodiment includes the following steps:
S101, detecting a wake-up voice input to the terminal device and the current scene in which the terminal device is located.
For example, when the user says "Xiaodu" to the terminal device, since this voice contains the wake-up word "Xiaodu" set by the user or by default, the voice spoken by the user is the current wake-up voice; the terminal device may receive the wake-up voice input by the user through a configured speech detection device such as a microphone.
Specifically, the user may be in different scenes. For example, at a concert the scene is noisy and the proportion of noise in the voice received by the device is high, which may cause the device to be woken up by mistake and degrade the user experience. Therefore, it is necessary to detect the current scene in which the terminal device is located and to wake up the device adaptively according to the scene, so as to avoid false wake-ups or missed wake-ups as far as possible. It is noted that detected scenes may be subdivided, for example into quiet scenes and noisy scenes; the probability of a false wake-up in a quiet scene is lower than that in a noisy scene.
In one possible implementation, the current scene in which the terminal device is located is detected as follows: detect the current position of the terminal device and determine the current scene according to that position. For example, the terminal device is configured with a positioning module such as a GPS (Global Positioning System); when the module detects that the current location is a KTV (karaoke) entertainment venue, the current scene is determined to be a noisy scene. As another example, when the positioning module detects that the current location is a library, the current scene is determined to be a quiet scene.
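The position-based scene detection just described can be sketched as a simple lookup. The place-type vocabulary, the scene labels, and the default for unknown places are illustrative assumptions; the patent does not fix any of them.

```python
# Hypothetical sketch: map a detected place type to a scene label.
NOISY_PLACES = {"ktv", "concert_hall", "bar"}
QUIET_PLACES = {"library", "office", "bedroom"}

def scene_from_location(place_type: str) -> str:
    """Return 'noisy' or 'quiet' for a detected place type."""
    place_type = place_type.lower()
    if place_type in NOISY_PLACES:
        return "noisy"
    if place_type in QUIET_PLACES:
        return "quiet"
    return "quiet"  # assumption: unknown places are treated as quiet

print(scene_from_location("KTV"))      # -> noisy
print(scene_from_location("library"))  # -> quiet
```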
In another possible implementation, the current scene in which the terminal device is located is detected as follows: detect scene voice around the terminal device, perform corpus analysis on the scene voice to acquire a corpus set of the scene voice, determine the scene corresponding to the corpus set, and determine that scene as the current scene in which the terminal device is located.
Here, scene voice may be understood as detected voice of the environment in which the terminal device is located. The scene voice may be detected before the wake-up voice, after it, or both; this is not limited here.
For example, scene voice detected in a library contains specific corpora such as borrowing books and returning books, while scene voice detected in a KTV venue contains specific corpora such as singers' names, song titles and requests for an encore. In this embodiment, corpus analysis is performed on the detected scene voice from multiple angles, such as semantics, speech and context, to obtain all corpora of the scene voice, which together form a corpus set. Optionally, a scene model capable of deep learning on the corpora of different scenes is configured in the terminal device; inputting the corpus set into this scene model yields the corresponding scene. Optionally, the scenes corresponding to corpus sets are subdivided into quiet scenes and noisy scenes, so the current scene of the terminal device can accordingly be determined to be either quiet or noisy.
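As a rough stand-in for the deep-learning scene model described above, the corpus-set-to-scene step can be illustrated with a keyword-overlap score. The reference corpora and scene labels below are invented for illustration and are not part of the patent.

```python
# Reference corpora per scene (illustrative; the patent's scene model is a
# deep-learning model, which this keyword-overlap score merely approximates).
SCENE_CORPORA = {
    "quiet": {"borrow", "return", "book", "shelf"},
    "noisy": {"song", "singer", "title", "encore"},
}

def classify_scene(corpus_set: set) -> str:
    """Pick the scene whose reference corpus overlaps the detected corpus set most."""
    scores = {scene: len(corpus_set & ref) for scene, ref in SCENE_CORPORA.items()}
    return max(scores, key=scores.get)

print(classify_scene({"song", "encore", "dance"}))  # -> noisy
print(classify_scene({"borrow", "book"}))           # -> quiet
```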
It should be noted that the manner of detecting the current scene of the terminal device is not limited to the examples above.
S102, acquiring a first threshold and a second threshold according to the current scene and the correspondence between scenes and thresholds, wherein the first threshold is greater than the second threshold.
Specifically, the first threshold and the second threshold may be set by the user or by the manufacturer before the terminal device leaves the factory; they are not particularly limited here. In this embodiment, different first and second thresholds are set for different scenes: for example, the first threshold for a noisy scene is higher than that for a quiet scene, and likewise for the second threshold. The thresholds are thus adjusted adaptively to the scene, which avoids, as far as possible, the false wake-ups or missed wake-ups that fixed thresholds would cause, and improves the user experience. More specifically, the correspondence between scenes and thresholds is configured in advance, so the first and second thresholds can be obtained accurately from the current scene and that correspondence.
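The preconfigured correspondence between scenes and threshold pairs can be sketched as a lookup table. All numeric values below are made-up placeholders; the patent fixes none.

```python
# Scene -> (first_threshold, second_threshold); values are illustrative only.
SCENE_THRESHOLDS = {
    "noisy": {"first": 0.90, "second": 0.65},  # stricter in noise
    "quiet": {"first": 0.80, "second": 0.55},
}

def get_thresholds(scene: str):
    """Look up the (first, second) threshold pair for the detected scene."""
    entry = SCENE_THRESHOLDS[scene]
    first, second = entry["first"], entry["second"]
    assert first > second  # the patent requires first > second
    return first, second

print(get_thresholds("noisy"))  # -> (0.9, 0.65)
```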
The thresholds are compared against the similarity between the wake-up voice and the preset wake-up word signal. Specifically, if the similarity is higher than the first threshold, the wake-up voice and the preset wake-up word signal may be considered matched; if it is lower than the second threshold, they may be considered not matched; and if it lies between the two thresholds, the degree of matching is neither high nor low, and it must be further confirmed whether the wake-up voice matches the preset wake-up word signal, such as "Xiaodu".
S103, analyzing acoustic features of the wake-up voice according to a first acoustic model, and acquiring a first similarity between the wake-up voice and a preset wake-up word signal.
Specifically, the acoustic model is one of the most important parts of a speech recognition system: through the acoustic model, the pronunciation sequence corresponding to the input speech can be obtained by analysis, as can the similarity between the input speech and a preset speech signal.
In this embodiment, voice endpoint detection may be used to separate the silent part of the detected wake-up voice from the actual speech part; acoustic features are then extracted from the actual speech part and input into the first acoustic model for analysis, yielding the first similarity between the wake-up voice and the preset wake-up word signal. Optionally, the first acoustic model is built on a hidden Markov model.
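Voice endpoint detection is named here but not specified; a minimal frame-energy sketch (an assumption for illustration, not the patented technique) shows the idea of trimming leading and trailing silence:

```python
# Minimal energy-based voice endpoint detection sketch. Frame length and
# energy threshold are illustrative assumptions.
def trim_silence(samples, frame_len=4, energy_thresh=0.01):
    """Drop leading/trailing frames whose mean squared amplitude is below threshold."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]

    def energy(frame):
        return sum(x * x for x in frame) / len(frame)

    voiced = [i for i, f in enumerate(frames) if energy(f) >= energy_thresh]
    if not voiced:
        return []
    out = []
    for i in range(voiced[0], voiced[-1] + 1):  # keep first..last voiced frame
        out.extend(frames[i])
    return out

sig = [0.0, 0.0, 0.0, 0.0, 0.5, -0.4, 0.3, -0.2, 0.0, 0.0, 0.0, 0.0]
print(trim_silence(sig))  # -> [0.5, -0.4, 0.3, -0.2]
```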
In one possible implementation, step S103 is implemented as follows: determine feature similarities between the acoustic features of the wake-up voice and those of the preset wake-up word signal according to the acoustic features of the wake-up voice and the first acoustic model; then determine the first similarity between the wake-up voice and the preset wake-up word signal according to these feature similarities.
For example, the wake-up voice has multiple acoustic features, and correspondingly the preset wake-up word signal has multiple acoustic features. The first acoustic model may compute the feature similarity between each acoustic feature of the wake-up voice and the corresponding feature of the preset wake-up word signal, and then perform a statistical analysis on the resulting feature similarities, for example using the maximum-likelihood principle: the maximum likelihood value between the acoustic features of the wake-up voice and those of the preset wake-up word signal is obtained and used as the first similarity.
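One conventional reading of combining per-feature similarities under a likelihood-based principle is to average log-likelihoods, i.e. take a geometric mean. This is an interpretive sketch, not the patent's exact computation; treating each similarity score as a likelihood in (0, 1] is an assumption.

```python
import math

def first_similarity(feature_sims):
    """Geometric mean of per-feature similarity scores in (0, 1]."""
    assert feature_sims and all(0 < s <= 1 for s in feature_sims)
    # Mean log-likelihood, mapped back to a score in (0, 1].
    log_sum = sum(math.log(s) for s in feature_sims)
    return math.exp(log_sum / len(feature_sims))

print(round(first_similarity([0.9, 0.8, 0.85]), 3))  # -> 0.849
```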
S104, judging whether the first similarity is greater than the second threshold and smaller than the first threshold.
Specifically, when the first similarity is greater than the second threshold and smaller than the first threshold, the similarity between the detected wake-up voice and the preset wake-up word signal is neither high nor low, and it must be further confirmed whether the wake-up voice matches the preset wake-up word signal, such as "Xiaodu".
S105, if the judgment result is yes, sending the wake-up voice to a cloud server so that the cloud server judges a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model and, if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device, wherein the recognition accuracy of the second acoustic model is greater than that of the first acoustic model.
In this embodiment, the first acoustic model is configured locally, i.e., in the terminal device, while the second acoustic model is configured in the cloud server. The cloud server has strong data-processing capacity; for example, it can mine more relevant data and perform deep learning to build a second acoustic model with higher recognition accuracy. Since the recognition accuracy of the second acoustic model is greater than that of the first, the case in which the similarity recognized by the first acoustic model is neither high nor low can be recognized again through the cloud server's second acoustic model.
If the cloud server judges, via the second acoustic model, that the second similarity between the wake-up voice and the preset wake-up word signal is greater than the first threshold, the wake-up voice may be considered matched with the preset wake-up word signal. Taking the preset wake-up word "Xiaodu" as an example, a matching recognition result indicates that the user has indeed spoken the wake-up voice, and the operation of waking up the terminal device can then be executed. Specifically, in this embodiment, if the second similarity is greater than the first threshold, a wake-up instruction for waking up the terminal device is generated; if the second similarity is smaller than the first threshold, no wake-up instruction is generated.
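Putting steps S104 and S105 together, the local/cloud routing can be sketched as follows. The cloud server's second acoustic model is stubbed out as a callable, and all threshold values in the usage lines are illustrative.

```python
def should_wake(first_sim, first_thresh, second_thresh, cloud_score_fn):
    """Decide whether to wake the device, per steps S104-S105."""
    if first_sim > first_thresh:
        return True            # confident local match: wake without the cloud
    if first_sim < second_thresh:
        return False           # confident local rejection: stay asleep
    # Intermediate case: defer to the cloud's second acoustic model.
    return cloud_score_fn() > first_thresh

# Stub lambdas stand in for the cloud server's second-model similarity.
print(should_wake(0.95, 0.9, 0.6, lambda: 0.0))   # True: local wake, cloud unused
print(should_wake(0.50, 0.9, 0.6, lambda: 1.0))   # False: local rejection
print(should_wake(0.75, 0.9, 0.6, lambda: 0.92))  # True: cloud confirms
print(should_wake(0.75, 0.9, 0.6, lambda: 0.70))  # False: cloud rejects
```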
In one possible implementation, if the second similarity is greater than the first threshold, the wake-up instruction for waking up the terminal device is generated as follows:
S1, analyzing the acoustic features of the wake-up voice according to the second acoustic model, and acquiring the pronunciation sequence corresponding to the wake-up voice.
In this embodiment, the pronunciation sequence that best matches the wake-up voice can be determined by the second acoustic model.
S2, analyzing the pronunciation sequence corresponding to the wake-up voice according to a language model, and acquiring the text sequence corresponding to the wake-up voice.
Specifically, the language model is one of the most important parts of a speech recognition system; through the language model, the text sequence corresponding to the input speech can be obtained, that is, the input speech is converted into text. Optionally, the language model is an N-gram model.
After the pronunciation sequence best matching the wake-up voice is determined by the second acoustic model, the text sequence best matching the wake-up voice can be determined by the language model.
S3, matching the text sequence corresponding to the wake-up voice with the text sequence corresponding to the preset wake-up word signal.
S4, if the matching is successful, generating the wake-up instruction for waking up the terminal device.
In this embodiment, the similarity between the acoustic features of the wake-up voice and the preset wake-up word signal is first judged by the second acoustic model, and then the text sequence corresponding to the wake-up voice is matched with that of the preset wake-up word signal by the language model; that is, matching is performed twice, from the angles of both speech and text, which makes the voice wake-up method more accurate and reliable.
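The cloud-side double check (steps S1-S4) can be illustrated with a toy pronunciation lexicon standing in for both the second acoustic model and the language model, which the patent treats as black boxes. The lexicon entries and the romanized wake word are assumptions for illustration only.

```python
# Toy pronunciation -> text lexicon (illustrative; a real system would decode
# with an acoustic model plus an N-gram language model instead).
LEXICON = {("x", "iao", "d", "u"): "xiaodu"}

def cloud_match(pronunciation_seq, preset_text="xiaodu"):
    """Decode the pronunciation sequence to text and match it against the preset wake word."""
    text = LEXICON.get(tuple(pronunciation_seq))
    # Success here corresponds to step S4: a wake-up instruction would be generated.
    return text == preset_text

print(cloud_match(["x", "iao", "d", "u"]))  # -> True: generate wake-up instruction
print(cloud_match(["n", "i", "h", "ao"]))   # -> False: stay asleep
```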
S106, receiving the wake-up instruction and executing the operation of waking up the terminal device.
The voice wake-up method provided by this embodiment of the invention includes: detecting a wake-up voice input to a terminal device and the current scene in which the terminal device is located; acquiring a first threshold and a second threshold according to the current scene and the correspondence between scenes and thresholds, wherein the first threshold is greater than the second threshold; analyzing acoustic features of the wake-up voice according to a first acoustic model to acquire a first similarity between the wake-up voice and a preset wake-up word signal; judging whether the first similarity is greater than the second threshold and smaller than the first threshold; if so, sending the wake-up voice to a cloud server so that the cloud server judges a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model and, if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device, wherein the recognition accuracy of the second acoustic model is greater than that of the first acoustic model; and receiving the wake-up instruction and executing the operation of waking up the terminal device. In this method, the case in which the similarity between the detected wake-up voice and the preset wake-up word signal, as recognized by the local first acoustic model, is neither high nor low is recognized again by the cloud server's second acoustic model, so that false wake-ups and missed wake-ups of the terminal device are avoided as far as possible and the user experience is improved.
Fig. 2 is a flowchart illustrating a voice wake-up method according to another embodiment of the present invention. On the basis of the above embodiment, if the first similarity is greater than the first threshold, an operation of waking up the terminal device is performed; or if the first similarity is smaller than the second threshold, not executing the operation of waking up the terminal device.
As shown in fig. 2, the voice wake-up method proposed in this embodiment includes the following steps:
S201, detecting the awakening voice input into the terminal equipment and the current scene where the terminal equipment is located, and executing the step S202.
S202, acquiring a first threshold and a second threshold according to the current scene and the corresponding relation between the scene and the threshold, wherein the first threshold is larger than the second threshold, and executing the step S203.
S203, analyzing the acoustic characteristics of the awakening voice according to a first acoustic model, acquiring a first similarity between the awakening voice and a preset awakening word signal, and executing the step S204.
And S204, judging whether the first similarity is larger than the second threshold and smaller than the first threshold, and executing any one of the steps S205, S207 and S208.
S205, if the judgment result is yes, sending the awakening voice to a cloud server so that the cloud server judges a second similarity between the awakening voice and the preset awakening word signal according to a second acoustic model, and if the second similarity is larger than the first threshold, generating an awakening instruction for awakening the terminal equipment, wherein the recognition accuracy of the second acoustic model is greater than the recognition accuracy of the first acoustic model; and executing the step S206.
S206, receiving the awakening instruction and executing the operation of awakening the terminal equipment.
It should be noted that the implementation manners of steps S201, S202, S203, S204, S205, and S206 in this embodiment are the same as the implementation manners of steps S101, S102, S103, S104, S105, and S106 in the foregoing embodiment, and are not described again here.
S207, if the first similarity is larger than the first threshold, the operation of awakening the terminal equipment is executed.
Specifically, when it is determined through the local first acoustic model that the first similarity is greater than the first threshold, the awakening voice can be considered to match the preset awakening word signal. Taking the preset awakening word 'Xiaodu' (rendered literally in translation as 'small degree') as an example, a matching recognition result indicates that the user has spoken the awakening voice 'Xiaodu', and the operation of awakening the terminal device can be executed at this time.
S208, if the first similarity is smaller than the second threshold, the operation of awakening the terminal equipment is not executed.
Specifically, when it is determined through the local first acoustic model that the first similarity is smaller than the second threshold, the awakening voice is considered not to match the preset awakening word signal. Again taking the preset awakening word 'Xiaodu' as an example, a non-matching recognition result indicates that the user has not spoken the awakening voice 'Xiaodu', and the operation of awakening the terminal device is not executed at this time.
According to the voice awakening method provided by this embodiment of the invention, when the first similarity is determined through the local first acoustic model to be greater than the first threshold, the operation of awakening the terminal device is executed; when the first similarity is determined through the local first acoustic model to be smaller than the second threshold, the operation of awakening the terminal device is not executed. That is to say, when the similarity between the awakening voice recognized by the first acoustic model and the preset awakening word signal is clearly high or clearly low, the terminal device determines by itself whether to execute the awakening operation, without sending the awakening voice to the cloud server for recognition, so the efficiency of executing the awakening operation of the terminal device can be improved.
Fig. 3 is a schematic structural diagram of a voice wake-up apparatus according to an embodiment of the present invention. The device can be realized by hardware and/or software, and can also be integrated into terminal equipment for executing the voice wake-up method.
As shown in fig. 3, the voice wake-up apparatus provided in this embodiment includes:
the first detection module 01 is used for detecting the awakening voice input to the terminal equipment;
the second detection module 02 is configured to detect a current scene where the terminal device is located;
a threshold module 03, configured to obtain a first threshold and a second threshold according to the current scene and a corresponding relationship between the scene and the threshold, where the first threshold is greater than the second threshold;
the analysis module 04 is configured to analyze the acoustic features of the wake-up voice according to a first acoustic model, and obtain a first similarity between the wake-up voice and a preset wake-up word signal;
the judging module 05 is configured to judge whether the first similarity is greater than the second threshold and smaller than the first threshold, and if the first similarity is greater than the second threshold and smaller than the first threshold, trigger the sending module;
the sending module 06 is configured to send the wake-up voice to a cloud server so that the cloud server determines a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model, and if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device; wherein the recognition accuracy of the second acoustic model is greater than the recognition accuracy of the first acoustic model;
and the first execution module 07 is configured to receive the wake-up instruction and execute an operation of waking up the terminal device.
Further, the cloud server comprises a wake-up instruction generation module;
the wake-up instruction generation module is specifically configured to:
analyzing the acoustic characteristics of the awakening voice according to the second acoustic model to obtain a pronunciation sequence corresponding to the awakening voice;
analyzing a pronunciation sequence corresponding to the awakening voice according to a language model to obtain a text sequence corresponding to the awakening voice;
matching the text sequence corresponding to the awakening voice with the text sequence corresponding to the preset awakening word signal;
and if the matching is successful, generating a wake-up instruction for waking up the terminal equipment.
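The three cloud-side steps just listed can be sketched as follows; `acoustic_model.decode`, `language_model.transcribe` and the instruction dictionary are hypothetical interfaces introduced for illustration, since the patent specifies only the roles of the second acoustic model and the language model:

```python
# Sketch of the cloud server's wake-up instruction generation described above.
# The .decode and .transcribe methods are assumed interfaces: the patent only
# requires that the second acoustic model yield a pronunciation sequence and
# that the language model map it to a text sequence.

def generate_wake_instruction(wake_features, acoustic_model, language_model,
                              preset_wake_text: str):
    # Step 1: second acoustic model -> pronunciation (e.g. phoneme) sequence.
    pronunciation_seq = acoustic_model.decode(wake_features)
    # Step 2: language model -> text sequence for the awakening voice.
    text_seq = language_model.transcribe(pronunciation_seq)
    # Step 3: match against the text of the preset awakening word signal.
    if text_seq == preset_wake_text:
        return {"type": "WAKE_UP"}  # wake-up instruction returned to the device
    return None                     # matching failed: no instruction generated
```

The instruction that this function returns on success is what the first execution module on the terminal device receives and acts upon.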
Further, the analysis module 04 is specifically configured to:
determining feature similarity between the acoustic features of the awakening voice and the acoustic features of the preset awakening word signal according to the acoustic features of the awakening voice and the first acoustic model;
and determining a first similarity between the awakening voice and the preset awakening word signal according to the feature similarities.
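Read together, the two steps above amount to frame-level scoring followed by aggregation. The sketch below is a hypothetical instance: the per-frame score 1 - |a - b| and the arithmetic-mean aggregation are illustrative assumptions, as the patent does not fix either formula:

```python
# Hypothetical sketch of the analysis module: per-frame feature similarities
# between the awakening voice and the preset awakening word signal are computed
# first, then aggregated into the single first similarity.

def feature_similarities(wake_features, preset_features):
    # One similarity per aligned frame; stays in [0, 1] for inputs in [0, 1].
    return [1.0 - abs(a - b) for a, b in zip(wake_features, preset_features)]

def first_similarity(sims):
    # Aggregate the per-frame scores into the first similarity (mean assumed).
    return sum(sims) / len(sims) if sims else 0.0

sims = feature_similarities([0.5, 1.0, 0.75], [0.5, 1.0, 0.75])
print(first_similarity(sims))  # identical features -> prints 1.0
```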
Further, the second detection module 02 is specifically configured to:
detecting the current position of the terminal equipment, and determining the current scene of the terminal equipment according to the current position;
or, the second detection module 02 is specifically configured to: detecting the scene voice of the terminal equipment, carrying out corpus analysis on the scene voice, acquiring a corpus set of the scene voice, determining a scene corresponding to the corpus set, and determining the scene corresponding to the corpus set as the current scene where the terminal equipment is located.
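Both detection strategies above can be sketched as follows; the position-to-scene table and the keyword-overlap stand-in for the scene model are illustrative assumptions, the patent's scene model being a deep-learning model trained on corpora of different scenes:

```python
# Sketch of the second detection module's two strategies. POSITION_SCENES and
# SCENE_KEYWORDS are hypothetical; keyword counting is only a simple stand-in
# for the deep-learning scene model described in the patent.

POSITION_SCENES = {"office": "quiet", "street": "noisy"}

SCENE_KEYWORDS = {
    "noisy": {"traffic", "honk", "crowd"},
    "quiet": {"meeting", "library", "whisper"},
}

def scene_from_position(position: str) -> str:
    # Strategy 1: map the detected position of the terminal device to a scene.
    return POSITION_SCENES.get(position, "quiet")

def scene_from_corpus(corpus_set) -> str:
    # Strategy 2: pick the scene whose corpus overlaps the detected corpus set
    # the most (stand-in for inputting the corpus set into the scene model).
    return max(SCENE_KEYWORDS,
               key=lambda scene: len(SCENE_KEYWORDS[scene] & set(corpus_set)))

print(scene_from_position("street"))           # prints noisy
print(scene_from_corpus({"traffic", "honk"}))  # prints noisy
```

Either strategy yields the current scene that the threshold module then uses to look up the first and second thresholds.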
With regard to the apparatus in this embodiment, the specific manner in which the respective modules perform operations has been described in detail in the embodiments of the method, and will not be elaborated upon here.
The voice wake-up device provided by the embodiment of the invention comprises: the first detection module is used for detecting the awakening voice input to the terminal equipment; the second detection module is used for detecting the current scene of the terminal equipment; a threshold module, configured to obtain a first threshold and a second threshold according to the current scene and a correspondence between the scene and the threshold, where the first threshold is greater than the second threshold; the analysis module is used for analyzing the acoustic characteristics of the awakening voice according to a first acoustic model to acquire a first similarity between the awakening voice and a preset awakening word signal; the judging module is used for judging whether the first similarity is larger than the second threshold and smaller than the first threshold, and if so, the sending module is triggered; the sending module is used for sending the awakening voice to a cloud server so that the cloud server judges a second similarity between the awakening voice and the preset awakening word signal according to a second acoustic model, and if the second similarity is larger than the first threshold, an awakening instruction for awakening the terminal device is generated; wherein the recognition accuracy of the second acoustic model is greater than the recognition accuracy of the first acoustic model; and the first execution module is used for receiving the awakening instruction and executing the operation of awakening the terminal equipment. 
With the device, an awakening voice whose similarity with the preset awakening word signal, as recognized by the local first acoustic model, is neither clearly high nor clearly low can be recognized again through the second acoustic model of the cloud server, so the situations in which the terminal equipment is awakened by mistake, or should be awakened but is not, can be avoided as much as possible, improving the user experience.
Fig. 4 is a schematic structural diagram of a voice wake-up apparatus according to an embodiment of the present invention. On the basis of the above embodiment, the voice wake-up apparatus further includes a second execution module and a third execution module.
As shown in fig. 4, the voice wake-up apparatus provided in this embodiment includes:
the first detection module 01 is used for detecting the awakening voice input to the terminal equipment;
the second detection module 02 is configured to detect a current scene where the terminal device is located;
a threshold module 03, configured to obtain a first threshold and a second threshold according to the current scene and a corresponding relationship between the scene and the threshold, where the first threshold is greater than the second threshold;
the analysis module 04 is configured to analyze the acoustic features of the wake-up voice according to a first acoustic model, and obtain a first similarity between the wake-up voice and a preset wake-up word signal;
a judging module 05, configured to judge whether the first similarity is greater than the second threshold and smaller than the first threshold, if so, trigger a sending module, or, if the judging result of the judging module is that the first similarity is greater than the first threshold, trigger a second executing module, or, if the judging result of the judging module is that the first similarity is smaller than the second threshold, trigger a third executing module;
the sending module 06 is configured to send the wake-up voice to a cloud server so that the cloud server determines a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model, and if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device; wherein the recognition accuracy of the second acoustic model is greater than the recognition accuracy of the first acoustic model;
and the first execution module 07 is configured to receive the wake-up instruction and execute an operation of waking up the terminal device.
Further, the cloud server comprises a wake-up instruction generation module;
the wake-up instruction generation module is specifically configured to:
analyzing the acoustic characteristics of the awakening voice according to the second acoustic model to obtain a pronunciation sequence corresponding to the awakening voice;
analyzing a pronunciation sequence corresponding to the awakening voice according to a language model to obtain a text sequence corresponding to the awakening voice;
matching the text sequence corresponding to the awakening voice with the text sequence corresponding to the preset awakening word signal;
and if the matching is successful, generating a wake-up instruction for waking up the terminal equipment.
Further, the analysis module 04 is specifically configured to:
determining feature similarity between the acoustic features of the awakening voice and the acoustic features of the preset awakening word signal according to the acoustic features of the awakening voice and the first acoustic model;
and determining a first similarity between the awakening voice and the preset awakening word signal according to the feature similarities.
Further, the second detection module 02 is specifically configured to:
detecting the current position of the terminal equipment, and determining the current scene of the terminal equipment according to the current position;
or, the second detection module 02 is specifically configured to: detecting the scene voice of the terminal equipment, carrying out corpus analysis on the scene voice, acquiring a corpus set of the scene voice, determining a scene corresponding to the corpus set, and determining the scene corresponding to the corpus set as the current scene where the terminal equipment is located.
And a second executing module 08, configured to execute an operation of waking up the terminal device.
And a third executing module 09, configured to not execute an operation of waking up the terminal device.
With regard to the apparatus in this embodiment, the specific manner in which the respective modules perform operations has been described in detail in the embodiments of the method, and will not be elaborated upon here.
According to the voice awakening device provided by this embodiment of the invention, when the first similarity is determined through the local first acoustic model to be greater than the first threshold, the operation of awakening the terminal device is executed; when the first similarity is determined through the local first acoustic model to be smaller than the second threshold, the operation of awakening the terminal device is not executed. That is to say, when the similarity between the awakening voice recognized by the first acoustic model and the preset awakening word signal is clearly high or clearly low, the terminal device determines by itself whether to execute the awakening operation, without sending the awakening voice to the cloud server for recognition, so the efficiency of executing the awakening operation of the terminal device can be improved.
FIG. 5 illustrates a block diagram of an exemplary computer device 20 suitable for use in implementing embodiments of the present invention. The computer device 20 shown in fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer device 20 is in the form of a general purpose computing device. The components of computer device 20 may include, but are not limited to: one or more processors or processing units 21, a system memory 22, and a bus 23 that couples various system components including the system memory 22 and the processing unit 21.
The system memory 22 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 23 by one or more data media interfaces. Memory 22 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 22, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 21 executes various functional applications and data processing by executing programs stored in the system memory 22, for example, implementing the voice wakeup method shown in fig. 1-2.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In order to implement the foregoing embodiments, the present invention further provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the voice wake-up method according to the foregoing embodiments is performed.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, is capable of implementing the voice wake-up method as described in the foregoing embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. A voice wake-up method, comprising:
detecting awakening voice input into terminal equipment and a current scene where the terminal equipment is located, wherein the scene voice of the terminal equipment is detected, the scene voice is subjected to corpus analysis, a corpus set of the scene voice is obtained, the scene corresponding to the corpus set is determined to be the current scene where the terminal equipment is located, a scene model for deep learning of corpora corresponding to different scenes is configured in the terminal equipment, and the scene corresponding to the corpus set is obtained by inputting the corpus set into the scene model for deep learning;
acquiring a first threshold and a second threshold according to the current scene and the corresponding relation between the scene and the thresholds, wherein the first threshold is larger than the second threshold, the current scene comprises a noise scene and a quiet scene, the first threshold corresponding to the noise scene is higher than the first threshold corresponding to the quiet scene, and the second threshold corresponding to the noise scene is higher than the second threshold corresponding to the quiet scene;
analyzing the acoustic characteristics of the awakening voice according to a first acoustic model to acquire a first similarity between the awakening voice and a preset awakening word signal;
judging whether the first similarity is larger than the second threshold and smaller than the first threshold;
if the judgment result is yes, sending the awakening voice to a cloud server so that the cloud server judges a second similarity between the awakening voice and the preset awakening word signal according to a second acoustic model, and if the second similarity is larger than the first threshold, generating an awakening instruction for awakening the terminal equipment; wherein the recognition accuracy of the second acoustic model is greater than the recognition accuracy of the first acoustic model;
and receiving the awakening instruction and executing the operation of awakening the terminal equipment.
2. The method of claim 1, wherein generating a wake-up instruction for waking up the terminal device if the second similarity is greater than the first threshold comprises:
analyzing the acoustic characteristics of the awakening voice according to the second acoustic model to obtain a pronunciation sequence corresponding to the awakening voice;
analyzing a pronunciation sequence corresponding to the awakening voice according to a language model to obtain a text sequence corresponding to the awakening voice;
matching the text sequence corresponding to the awakening voice with the text sequence corresponding to the preset awakening word signal;
and if the matching is successful, generating a wake-up instruction for waking up the terminal equipment.
3. The method of claim 1, wherein the analyzing the acoustic features of the wake-up speech according to the first acoustic model to obtain a first similarity between the wake-up speech and a preset wake-up word signal comprises:
determining feature similarity between the acoustic features of the awakening voice and the acoustic features of the preset awakening word signal according to the acoustic features of the awakening voice and the first acoustic model;
and determining a first similarity between the awakening voice and the preset awakening word signal according to the feature similarities.
4. The method of claim 1, further comprising:
if the first similarity is larger than the first threshold, the operation of awakening the terminal equipment is executed;
or if the first similarity is smaller than the second threshold, not executing the operation of waking up the terminal device.
5. A voice wake-up apparatus, comprising:
a first detection module, configured to detect a wake-up voice input to a terminal device;
a second detection module, configured to detect a current scene of the terminal device; the second detection module is specifically configured to: detect scene voice of the terminal device, perform corpus analysis on the scene voice, obtain a corpus set of the scene voice, determine a scene corresponding to the corpus set, and determine that scene as the current scene of the terminal device, wherein a scene model obtained by deep learning on corpora of different scenes is configured in the terminal device, and the scene corresponding to the corpus set is obtained by inputting the corpus set into the scene model;
a threshold module, configured to obtain a first threshold and a second threshold according to the current scene and a correspondence between scenes and thresholds, wherein the first threshold is greater than the second threshold, the current scene comprises a noise scene and a quiet scene, the first threshold corresponding to the noise scene is higher than the first threshold corresponding to the quiet scene, and the second threshold corresponding to the noise scene is higher than the second threshold corresponding to the quiet scene;
an analysis module, configured to analyze acoustic features of the wake-up voice according to a first acoustic model to obtain a first similarity between the wake-up voice and a preset wake-up word signal;
a judging module, configured to judge whether the first similarity is greater than the second threshold and smaller than the first threshold, and if so, trigger a sending module;
the sending module, configured to send the wake-up voice to a cloud server, so that the cloud server judges a second similarity between the wake-up voice and the preset wake-up word signal according to a second acoustic model, and if the second similarity is greater than the first threshold, generates a wake-up instruction for waking up the terminal device, wherein the recognition accuracy of the second acoustic model is greater than that of the first acoustic model;
and a first execution module, configured to receive the wake-up instruction and perform the operation of waking up the terminal device.
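The threshold module of claim 5 only constrains the ordering of the thresholds: both thresholds for the noise scene must exceed their counterparts for the quiet scene, and within each scene the first threshold must exceed the second. A sketch of such a scene-to-threshold correspondence; the numeric values are purely hypothetical:

```python
# Hypothetical scene-to-threshold table. The patent fixes only the ordering
# constraints, not the values: noise thresholds > quiet thresholds, high > low.
SCENE_THRESHOLDS = {
    "quiet": {"high": 0.70, "low": 0.40},
    "noise": {"high": 0.85, "low": 0.55},
}

def thresholds_for(scene):
    """Return (first_threshold, second_threshold) for the detected scene."""
    t = SCENE_THRESHOLDS[scene]
    assert t["high"] > t["low"]  # claim 5: first threshold > second threshold
    return t["high"], t["low"]
```

Raising both thresholds in a noisy scene trades recall for precision where false triggers are most likely, which is the stated rationale for conditioning the thresholds on the scene.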
6. The apparatus of claim 5, wherein the cloud server comprises a wake-up instruction generation module;
the wake-up instruction generation module is specifically configured to:
analyze the acoustic features of the wake-up voice according to the second acoustic model to obtain a pronunciation sequence corresponding to the wake-up voice;
analyze the pronunciation sequence corresponding to the wake-up voice according to a language model to obtain a text sequence corresponding to the wake-up voice;
match the text sequence corresponding to the wake-up voice against the text sequence corresponding to the preset wake-up word signal;
and if the matching succeeds, generate a wake-up instruction for waking up the terminal device.
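The cloud-side pipeline of claims 2 and 6 is a two-stage decode followed by a text match: acoustic features → pronunciation sequence (second acoustic model) → text sequence (language model) → comparison with the wake-up word text. A sketch with stand-in model callables; the function names and the exact-match criterion are assumptions, since the claims do not specify how the match is computed:

```python
def cloud_verify(voice_features, wake_word_text, acoustic_model, language_model):
    """Second-pass verification on the cloud server (claims 2 and 6).

    `acoustic_model` and `language_model` are stand-ins for the claimed
    models: the first maps features to a pronunciation sequence, the
    second maps that sequence to a text sequence.
    """
    phones = acoustic_model(voice_features)   # features -> pronunciation sequence
    text = language_model(phones)             # pronunciation -> text sequence
    if text == wake_word_text:                # match against the wake-up word text
        return "WAKE_UP"                      # wake-up instruction sent to the terminal
    return None                               # matching failed: no instruction
```

Decoding all the way to text lets the cloud pass reject near-misses (similar-sounding phrases) that a purely acoustic score would accept, which is why the second model's recognition accuracy can exceed the first's.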
7. The apparatus of claim 5, wherein the analysis module is specifically configured to:
determine feature similarities between the acoustic features of the wake-up voice and the acoustic features of the preset wake-up word signal according to the acoustic features of the wake-up voice and the first acoustic model;
and determine the first similarity between the wake-up voice and the preset wake-up word signal according to the feature similarities.
8. The apparatus of claim 5, further comprising a second execution module and a third execution module;
wherein, if the judging module judges that the first similarity is greater than the first threshold, the second execution module is triggered, the second execution module being configured to perform the operation of waking up the terminal device;
or, if the judging module judges that the first similarity is smaller than the second threshold, the third execution module is triggered, the third execution module being configured not to perform the operation of waking up the terminal device.
9. A computer device, comprising: a processor and a memory;
wherein the processor implements the voice wake-up method according to any one of claims 1 to 4 by reading executable program code stored in the memory and running a program corresponding to the executable program code.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the voice wake-up method according to any one of claims 1 to 4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710922732.XA CN107622770B (en) | 2017-09-30 | 2017-09-30 | Voice wake-up method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN107622770A CN107622770A (en) | 2018-01-23 |
| CN107622770B true CN107622770B (en) | 2021-03-16 |
Family
ID=61091402
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710922732.XA Active CN107622770B (en) | 2017-09-30 | 2017-09-30 | Voice wake-up method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107622770B (en) |
Families Citing this family (47)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107564517A (en) | 2017-07-05 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Voice awakening method, equipment and system, cloud server and computer-readable recording medium |
| CN108198548B (en) * | 2018-01-25 | 2020-11-20 | 苏州奇梦者网络科技有限公司 | Voice awakening method and system |
| CN110444195B (en) * | 2018-01-31 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Method and device for recognizing voice keywords |
| CN108335696A (en) * | 2018-02-09 | 2018-07-27 | 百度在线网络技术(北京)有限公司 | Voice awakening method and device |
| CN108196465A (en) * | 2018-03-07 | 2018-06-22 | 佛山市云米电器科技有限公司 | A kind of intelligent sound box and its control method based on phonetic order control |
| CN108537019A (en) * | 2018-03-20 | 2018-09-14 | 努比亚技术有限公司 | A kind of unlocking method and device, storage medium |
| CN108564941B (en) * | 2018-03-22 | 2020-06-02 | 腾讯科技(深圳)有限公司 | Voice recognition method, device, equipment and storage medium |
| CN108665900B (en) | 2018-04-23 | 2020-03-03 | 百度在线网络技术(北京)有限公司 | Cloud wake-up method and system, terminal and computer readable storage medium |
| CN108924337A (en) * | 2018-05-02 | 2018-11-30 | 宇龙计算机通信科技(深圳)有限公司 | A kind of control method and device waking up performance |
| CN110600023A (en) * | 2018-06-12 | 2019-12-20 | Tcl集团股份有限公司 | Terminal equipment interaction method and device and terminal equipment |
| CN108962240B (en) * | 2018-06-14 | 2021-09-21 | 百度在线网络技术(北京)有限公司 | Voice control method and system based on earphone |
| CN108831477B (en) * | 2018-06-14 | 2021-07-09 | 出门问问信息科技有限公司 | A speech recognition method, device, equipment and storage medium |
| CN109215647A (en) * | 2018-08-30 | 2019-01-15 | 出门问问信息科技有限公司 | Voice awakening method, electronic equipment and non-transient computer readable storage medium |
| JP7001029B2 (en) * | 2018-09-11 | 2022-01-19 | 日本電信電話株式会社 | Keyword detector, keyword detection method, and program |
| CN109473092B (en) * | 2018-12-03 | 2021-11-16 | 珠海格力电器股份有限公司 | Voice endpoint detection method and device |
| CN109584873A (en) * | 2018-12-13 | 2019-04-05 | 北京极智感科技有限公司 | A kind of awakening method, device, readable medium and the equipment of vehicle-mounted voice system |
| CN109817200A (en) * | 2019-01-30 | 2019-05-28 | 北京声智科技有限公司 | The optimization device and method that voice wakes up |
| CN110049107B (en) * | 2019-03-22 | 2022-04-08 | 钛马信息网络技术有限公司 | Internet vehicle awakening method, device, equipment and medium |
| CN110060678B (en) * | 2019-04-16 | 2021-09-14 | 深圳欧博思智能科技有限公司 | Virtual role control method based on intelligent device and intelligent device |
| CN110070857B (en) * | 2019-04-25 | 2021-11-23 | 北京梧桐车联科技有限责任公司 | Model parameter adjusting method and device of voice awakening model and voice equipment |
| CN110047487B (en) * | 2019-06-05 | 2022-03-18 | 广州小鹏汽车科技有限公司 | Wake-up method and device for vehicle-mounted voice equipment, vehicle and machine-readable medium |
| CN110390934B (en) * | 2019-06-25 | 2022-07-26 | 华为技术有限公司 | Information prompting method and voice interaction terminal |
| CN110544468B (en) * | 2019-08-23 | 2022-07-12 | Oppo广东移动通信有限公司 | Application awakening method and device, storage medium and electronic equipment |
| CN110515449B (en) * | 2019-08-30 | 2021-06-04 | 北京安云世纪科技有限公司 | Method and device for awakening intelligent equipment |
| CN110718212A (en) * | 2019-10-12 | 2020-01-21 | 出门问问信息科技有限公司 | Voice wake-up method, device and system, terminal and computer readable storage medium |
| CN110706703A (en) * | 2019-10-16 | 2020-01-17 | 珠海格力电器股份有限公司 | Voice wake-up method, device, medium and equipment |
| CN110808030B (en) * | 2019-11-22 | 2021-01-22 | 珠海格力电器股份有限公司 | Voice awakening method, system, storage medium and electronic equipment |
| CN110910878B (en) * | 2019-11-27 | 2022-02-11 | 珠海格力电器股份有限公司 | Voice wake-up control method and device, storage medium and household appliance |
| CN111081251B (en) * | 2019-11-27 | 2022-03-04 | 云知声智能科技股份有限公司 | Voice wake-up method and device |
| CN113192499A (en) * | 2020-01-10 | 2021-07-30 | 青岛海信移动通信技术股份有限公司 | Voice awakening method and terminal |
| CN111223490A (en) * | 2020-03-12 | 2020-06-02 | Oppo广东移动通信有限公司 | Voiceprint wake-up method and apparatus, device and storage medium |
| CN111696562B (en) * | 2020-04-29 | 2022-08-19 | 华为技术有限公司 | Voice wake-up method, device and storage medium |
| CN111627439B (en) * | 2020-05-21 | 2022-07-22 | 腾讯科技(深圳)有限公司 | Audio data processing method and device, storage medium and electronic equipment |
| CN111724766B (en) * | 2020-06-29 | 2024-01-05 | 合肥讯飞数码科技有限公司 | Language identification method, related equipment and readable storage medium |
| CN112133301A (en) * | 2020-08-21 | 2020-12-25 | 深圳数联天下智能科技有限公司 | Voice recognition method, control device, voice recognition circuit and household equipment |
| CN114863936B (en) * | 2021-01-20 | 2025-05-16 | 华为技术有限公司 | A wake-up method and electronic device |
| CN114944155B (en) * | 2021-02-14 | 2024-06-04 | 成都启英泰伦科技有限公司 | Off-line voice recognition method combining terminal hardware and algorithm software processing |
| CN113516977B (en) * | 2021-03-15 | 2024-08-02 | 每刻深思智能科技(北京)有限责任公司 | Keyword recognition method and system |
| WO2022222045A1 (en) * | 2021-04-20 | 2022-10-27 | 华为技术有限公司 | Speech information processing method, and device |
| CN115249474A (en) * | 2021-04-27 | 2022-10-28 | 上海博泰悦臻网络技术服务有限公司 | Voice information recognition method, system, device and storage medium |
| CN113205809A (en) * | 2021-04-30 | 2021-08-03 | 思必驰科技股份有限公司 | Voice wake-up method and device |
| CN113744734A (en) * | 2021-08-30 | 2021-12-03 | 青岛海尔科技有限公司 | A voice wake-up method, device, electronic device and storage medium |
| CN113613079B (en) * | 2021-10-11 | 2022-01-04 | 浙江德塔森特数据技术有限公司 | Intelligent device video advertisement processing method and intelligent device |
| CN114915514B (en) * | 2022-03-28 | 2024-03-22 | 青岛海尔科技有限公司 | Method and device for processing intention, storage medium and electronic device |
| US12451135B2 (en) * | 2022-08-09 | 2025-10-21 | Samsung Electronics Co., Ltd. | Context-aware false trigger mitigation for automatic speech recognition (ASR) systems or other systems |
| CN118057805B (en) * | 2022-11-18 | 2025-09-09 | 荣耀终端股份有限公司 | Awakening method and awakening device of voice assistant |
| CN117594046B (en) * | 2023-10-19 | 2025-07-04 | 摩尔线程智能科技(北京)股份有限公司 | A model training method, awakening method, device and storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160189706A1 (en) * | 2014-12-30 | 2016-06-30 | Broadcom Corporation | Isolated word training and detection |
| CN106297777A (en) * | 2016-08-11 | 2017-01-04 | 广州视源电子科技股份有限公司 | Method and device for awakening voice service |
| CN106448663A (en) * | 2016-10-17 | 2017-02-22 | 海信集团有限公司 | Voice wakeup method and voice interaction device |
| CN106653031A (en) * | 2016-10-17 | 2017-05-10 | 海信集团有限公司 | Voice wake-up method and voice interaction device |
| CN107134279A (en) * | 2017-06-30 | 2017-09-05 | 百度在线网络技术(北京)有限公司 | A kind of voice awakening method, device, terminal and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107622770B (en) | Voice wake-up method and device | |
| CN110838289B (en) | Wake-up word detection method, device, equipment and medium based on artificial intelligence | |
| CN110718223B (en) | Method, apparatus, device and medium for voice interaction control | |
| US10943582B2 (en) | Method and apparatus of training acoustic feature extracting model, device and computer storage medium | |
| US10515627B2 (en) | Method and apparatus of building acoustic feature extracting model, and acoustic feature extracting method and apparatus | |
| CN107919130B (en) | Cloud-based voice processing method and device | |
| EP3078021B1 (en) | Initiating actions based on partial hotwords | |
| US10418027B2 (en) | Electronic device and method for controlling the same | |
| US8606581B1 (en) | Multi-pass speech recognition | |
| CN114038457B (en) | Method, electronic device, storage medium, and program for voice wakeup | |
| CN107886944B (en) | Voice recognition method, device, equipment and storage medium | |
| US20150325240A1 (en) | Method and system for speech input | |
| CN110706707B (en) | Method, apparatus, device and computer-readable storage medium for voice interaction | |
| EP2685452A1 (en) | Method of recognizing speech and electronic device thereof | |
| CN112151015A (en) | Keyword detection method and device, electronic equipment and storage medium | |
| CN105190746A (en) | Method and apparatus for detecting target keywords | |
| US12217751B2 (en) | Digital signal processor-based continued conversation | |
| CN107679032A (en) | Voice changes error correction method and device | |
| CN107516526B (en) | Sound source tracking and positioning method, device, equipment and computer readable storage medium | |
| CN108055617B (en) | A wake-up method, device, terminal device and storage medium for a microphone | |
| KR102409873B1 (en) | Method and system for training speech recognition models using augmented consistency regularization | |
| CN110047481A (en) | Method for voice recognition and device | |
| CN113053390A (en) | Text processing method and device based on voice recognition, electronic equipment and medium | |
| CN110858479A (en) | Voice recognition model updating method and device, storage medium and electronic equipment | |
| US20230169988A1 (en) | Method and apparatus for performing speaker diarization based on language identification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |