CN111161714B - Voice information processing method, electronic equipment and storage medium - Google Patents
Voice information processing method, electronic equipment and storage medium
- Publication number
- CN111161714B (application CN201911353919.8A)
- Authority
- CN
- China
- Prior art keywords
- information
- wake
- voice
- preset model
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/28—Constructional details of speech recognition systems
- G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
Abstract
The embodiment of the application discloses a voice information processing method, which comprises the following steps: receiving voice data information; performing matching processing based on the voice data information and a first preset model library, wherein the first preset model library comprises at least one piece of matching standard information; if the result of the matching processing is information matching, performing matching processing based on the voice data information and a second preset model library, and determining a voice assistant application program to be woken up, wherein the first preset model library is different from the second preset model library, and the second preset model library comprises at least one piece of matching standard information; and waking up the voice assistant application program to be woken up. The embodiment of the application also provides an electronic device and a storage medium.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a voice information processing method, an electronic device, and a storage medium.
Background
With the rapid development of science and technology, electronic devices are used ever more widely and have a great influence on people's work, life, study and entertainment. In order to improve the degree of intelligence of an electronic device, a voice assistant is often provided in the electronic device.
However, at present, when a user uses voice to wake up one of a plurality of voice assistant application programs installed on the electronic device, the wake-up rate is low and false wake-ups easily occur, so that the degree of intelligence of the electronic device is low.
Summary of the application
In order to solve the above technical problems, embodiments of the application are expected to provide a voice information processing method, an electronic device and a storage medium, which solve the current problems that, when a voice assistant application program installed in an electronic device is woken up, the wake-up rate is low and false wake-ups easily occur, thereby improving the wake-up rate, reducing the probability of false wake-up and improving the degree of intelligence of the electronic device.
The technical scheme of the application is realized as follows:
in a first aspect, a method for processing voice information, the method comprising:
receiving voice data information;
performing matching processing based on the voice data information and a first preset model library; wherein the first preset model library comprises at least one piece of matching standard information;
if the matching processing is information matching, performing matching processing based on the voice data information and a second preset model library, and determining a voice assistant application program to be awakened; the first preset model library is different from the second preset model library, and the second preset model library comprises at least one piece of matching standard information;
And waking up the voice assistant application program to be woken up.
Optionally, the matching processing based on the voice data information and the first preset model library includes:
acquiring target wake-up information from the voice data information;
matching the target wake-up information with first reference wake-up information in the first preset model library; the first reference wake-up information belongs to at least one piece of matching standard information included in the first preset model library, and is at least one piece of information of a plurality of wake-up information;
and if the target wake-up information is matched with the first reference wake-up information, determining that the information is matched.
Optionally, if the matching result is that the information is matched, performing matching processing based on the voice data information and a second preset model library, and determining the voice assistant application program to be awakened includes:
if the matching result is information matching, matching the target wake-up information with second reference wake-up information in the second preset model library; wherein the second reference wake-up information belongs to at least one matching standard information included in the second preset model library;
If the target wake-up information is matched with second reference wake-up information in the second preset model library, determining a target voice assistant application program corresponding to the target wake-up information;
the target voice assistant application is determined to be the voice assistant application to be awakened.
Optionally, the target wake-up information is a first acoustic model corresponding to a target wake-up word, the first reference wake-up information is a second acoustic model corresponding to a wake-up word, and the second reference wake-up information is a third acoustic model corresponding to a wake-up word; and if the target wake-up information is matched with the second reference wake-up information in the second preset model library, determining the target voice assistant application program corresponding to the target wake-up information includes:
if the first acoustic model is matched with the third acoustic model, determining the target wake word based on the first acoustic model;
the target voice assistant application is determined based on the target wake word.
Optionally, after the waking the voice assistant application to be woken up, the method further includes:
determining a target control instruction based on the voice data information;
Sending the target control instruction to a server corresponding to the awakened voice assistant application program to be awakened through the awakened voice assistant application program to be awakened;
receiving feedback information sent by the server and playing the feedback information; the feedback information is obtained by the server responding to the target control instruction and executing the operation corresponding to the target control instruction.
Optionally, the determining the target control instruction based on the voice data information includes:
acquiring target voice control information from the voice data information;
and performing voice-signal-to-text conversion on the target voice control information to obtain the target control instruction.
Optionally, before the matching processing is performed on the voice data information and the first preset model library, the method further includes:
determining a first number of voice assistant applications installed in the electronic device;
acquiring wake words of each voice assistant application program in the first number of voice assistant application programs;
determining a first acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the first preset model library;
determining a second acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the second preset model library; wherein the quality requirement of the first acoustic model is lower than that of the second acoustic model.
Optionally, after determining the second acoustic model corresponding to the wake-up word of each voice assistant application program and obtaining the second preset model library, the method further includes:
if it is detected that a second number of voice assistant application programs are installed on the electronic device after an update, determining the changed voice assistant application program based on the first number of voice assistant application programs and the second number of voice assistant application programs;
updating the first preset model library and the second preset model library based on wake words corresponding to the changed voice assistant application program.
In a second aspect, an electronic device, the electronic device comprising: a microphone, a low voltage digital signal processor DSP, an application processor AP, wherein:
the microphone is used for collecting voice data information;
the low-voltage DSP is used for performing matching processing on the voice data information and the information in the first preset model library, waking up the AP to realize a primary wake-up when the voice data information matches the information in the first preset model library, and sending the voice data information to the AP, so that the power consumption of the electronic device is reduced;
The AP is used for carrying out matching processing on the voice data information and the information in the second preset model library, waking up the voice assistant application program to be woken up corresponding to the voice data information to realize secondary waking up when the voice data information is matched with the information in the second preset model library, and executing relevant steps after waking up the voice assistant application program to be woken up.
In a third aspect, a storage medium has stored thereon a speech information processing program that, when executed by a processor, implements the steps of the speech information processing method according to any one of the preceding claims.
The embodiment of the application provides a voice information processing method, electronic equipment and a storage medium, wherein by receiving voice data information and carrying out matching processing on the basis of the voice data information and a first preset model library, if the matching processing result is information matching, the matching processing is carried out on the basis of the voice data information and a second preset model library, a voice assistant application program to be awakened is determined, and the voice assistant application program to be awakened is awakened. Therefore, when the received voice data information is matched with the information in the first preset model library, the voice data information is matched with the second preset model library again to determine the voice assistant application program to be awakened and awaken, two-stage awakening is realized, the problems that when one voice assistant application program installed in the electronic equipment is awakened at present, the awakening rate is low and false awakening is easy to occur are solved, the awakening rate is improved, the false awakening probability is reduced, and the intelligent degree of the electronic equipment is improved.
Drawings
Fig. 1 is a schematic flowchart of a voice information processing method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of another voice information processing method according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of yet another voice information processing method according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of a voice information processing method according to another embodiment of the present application;
Fig. 5 is a schematic flowchart of another voice information processing method according to another embodiment of the present application;
Fig. 6 is a schematic diagram of an application scenario of a voice information processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An embodiment of the present application provides a voice information processing method, referring to fig. 1, where the method is applied to an electronic device, and the method includes the following steps:
step 101, receiving voice data information.
In the embodiment of the application, the electronic device may use a voice acquisition device such as a microphone to acquire voice data information. The voice acquisition device such as a microphone may be disposed within the electronic device or may have a communication link with the electronic device. The voice data information may be the voice information acquired by the voice acquisition device such as the microphone, or may be obtained after the device performs digital preprocessing, such as preliminary sampling, on the voice information acquired by the voice acquisition device such as the microphone.
Step 102, matching processing is performed on the basis of the voice data information and the first preset model base.
The first preset model library comprises at least one piece of matching standard information.
In this embodiment of the present application, the first preset model library is preset and includes at least one piece of matching standard information that has the same format as the voice data information and is used for matching the voice data information. The matching standard information is a plurality of pieces of different speech model information obtained in advance. The voice data information is preliminarily matched with the speech model information in the first preset model library.
And step 103, if the matching processing is information matching, performing matching processing based on the voice data information and a second preset model library, and determining the voice assistant application program to be awakened.
The first preset model library is different from the second preset model library, and the second preset model library comprises at least one piece of matching standard information.
In the embodiment of the application, if at least one piece of matching standard information in the first preset model library matches the voice data information, the result of the matching processing is determined to be information matching. When the result of the matching processing is information matching, precise matching processing is performed on the voice data information and the at least one piece of matching standard information in the second preset model library so as to determine the voice assistant application program to be woken up; at this time, the identification information of the voice assistant application program to be woken up can be determined.
Step 104, wake up the voice assistant application to be waken.
In this embodiment of the present application, after determining the voice assistant application to be awakened, a control instruction for controlling the operation of the voice assistant application to be awakened may be generated to implement awakening, i.e. the voice assistant application to be awakened is started to enable the voice assistant application to operate, for example, voice control instruction information sent by a user is received, and corresponding operations are performed.
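To make the two-stage flow of steps 101 to 104 concrete, the following is a minimal illustrative sketch in Python. It is not part of the claimed method; the similarity() helper, the dictionary-based model libraries and the numeric thresholds (a low, tolerant one for the coarse stage and 90% for the precise stage, in line with the matching degrees mentioned later in this description) are assumptions introduced here only for illustration.

    # Illustrative sketch of the two-stage wake-up (steps 101-104); all names are hypothetical.
    def similarity(voice_data, model):
        # Placeholder comparison; a real system would compare acoustic feature parameters.
        common = set(voice_data) & set(model)
        return len(common) / max(len(set(model)), 1)

    def process_voice(voice_data, first_library, second_library, assistants):
        # Step 102: coarse matching against the first preset model library (low, tolerant threshold).
        if not any(similarity(voice_data, m) >= 0.3 for m in first_library):
            return None                      # no information match, nothing is woken up
        # Step 103: precise matching against the second preset model library (high threshold).
        for wake_word, model in second_library.items():
            if similarity(voice_data, model) >= 0.9:
                app = assistants[wake_word]  # voice assistant application program to be woken up
                app["awake"] = True          # step 104: wake it up
                return app
        return None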
The embodiment of the application provides a voice information processing method, which is characterized in that voice data information is received, matching processing is carried out on the basis of the voice data information and a first preset model library, if the matching processing result is information matching, matching processing is carried out on the basis of the voice data information and a second preset model library, a voice assistant application program to be awakened is determined, and the voice assistant application program to be awakened is awakened. Therefore, when the received voice data information is matched with the information in the first preset model library, the voice data information is matched with the second preset model library again to determine the voice assistant application program to be awakened and awaken, two-stage awakening is realized, the problems that when one voice assistant application program installed in the electronic equipment is awakened at present, the awakening rate is low and false awakening is easy to occur are solved, the awakening rate is improved, the false awakening probability is reduced, and the intelligent degree of the electronic equipment is improved.
Based on the foregoing embodiments, embodiments of the present application provide a voice information processing method, referring to fig. 2, the method is applied to an electronic device, and the method includes the following steps:
step 201, receiving voice data information.
In the embodiment of the application, the electronic device collects the voice information sent by the user through the voice collection module, such as the microphone, which is arranged in the electronic device, and samples the collected voice information to obtain the corresponding voice digital information, namely the voice data information.
Step 202, performing matching processing with a first preset model base based on the voice data information.
The first preset model library comprises at least one piece of matching standard information.
In other embodiments of the present application, step 202 may be implemented by the following steps 202 a-202 c:
step 202a, obtaining target wake-up information from voice data information.
In the embodiment of the present application, the target wake-up information may be a spoken word or phrase used to wake up the voice assistant application. For example, when a certain brand of voice assistant application is developed, it may be set so that, once the application receives and recognizes the voice "Small X" uttered by the user, the voice assistant application is automatically started. The target wake-up information may be obtained from the voice data information by a voice recognition function.
Step 202b, matching the target wake-up information with the first reference wake-up information in the first preset model library.
The first reference wake-up information belongs to at least one piece of matching standard information included in a first preset model library, and the first reference wake-up information is at least one piece of information of a plurality of wake-up information.
In this embodiment of the present application, the first reference wake-up information may be at least one characteristic parameter shared by a plurality of pieces of wake-up information, that is, the first reference wake-up information represents a set of wake-up information; for example, the first reference wake-up information may indicate that each piece of wake-up information is the pronunciation of two characters, that the pronunciation of the first character is the same and that the tone of the second character is the same, and so on. In other application scenarios, the first reference wake-up information may also be a reference model that corresponds to one piece of wake-up information and has a relatively high fault tolerance, that is, a relatively low required matching degree with the target wake-up information; for example, the matching degree between the characteristic parameters of the target wake-up information and the characteristic parameters of the first reference wake-up information may even be lower than 50%.
Step 202c, if the target wake-up information is matched with the first reference wake-up information, determining that the information is matched.
It should be noted that step 202 may be implemented in a low-voltage (low-power) digital signal processor (DSP) provided in the electronic device, that is, the first preset model library is stored in a storage area of the low-power DSP. At this stage only the low-power DSP operates, while other programs in the electronic device do not operate and are, for example, in a sleep state, so that the power consumption of the electronic device can be reduced. The process of matching the target wake-up information with the first preset model library may be referred to as a primary wake-up process.
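As an illustration of this primary wake-up stage, the sketch below (reusing the hypothetical similarity() helper from the earlier sketch; the ap object, its wake() and send() methods and the threshold are likewise assumptions, not part of the patent) shows the low-power DSP performing the first-stage match while the application processor stays asleep:

    # Hypothetical sketch of the primary wake-up running on the low-power DSP (steps 202a-202c).
    def extract_wake_info(voice_data):
        # Stub: a real DSP would extract the acoustic features of the wake-up portion.
        return voice_data.split(",")[0]

    def dsp_primary_wake(voice_data, first_library, ap):
        target = extract_wake_info(voice_data)            # step 202a
        for first_reference in first_library:             # step 202b
            if similarity(target, first_reference) >= 0.3:
                ap.wake()                                  # primary wake-up: activate the AP
                ap.send(voice_data)                        # hand the buffered data to the AP
                return True                                # step 202c: information matched
        return False                                       # AP stays asleep, keeping power low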
And 203, if the matching process is information matching, performing the matching process based on the voice data information and the second preset model library, and determining the voice assistant application program to be awakened.
The first preset model library is different from the second preset model library, and the second preset model library comprises at least one piece of matching standard information.
In this embodiment of the present application, the at least one piece of matching standard information in the second preset model library is set with more characteristic parameters than the at least one piece of matching standard information in the first preset model library; that is, the fault tolerance of the matching standard information in the second preset model library is lower, and the required matching degree between the voice data information and the matching standard information in the second preset model library is higher. In other words, when the voice data information matches the matching standard information in the second preset model library, the matching degree between the characteristic parameters of the voice data information and the characteristic parameters of that matching standard information is high, for example more than 90%.
In other embodiments of the present application, step 203 may be implemented by the following steps 203 a-203 c:
and 203a, if the matching result is that the information is matched, matching the target wake-up information with the second reference wake-up information in the second preset model library.
The second reference wake-up information belongs to at least one matching standard information included in a second preset model library.
In the embodiment of the application, when the low-power DSP successfully matches the target wake-up information with the first reference wake-up information in the first preset model library, that is, when the matching result is information matching, a signal is sent to the application processor (AP) of the electronic device to wake up the AP, and the target wake-up information is then sent to the AP, so that the AP performs matching processing on the target wake-up information and the second reference wake-up information in the second preset model library stored in the AP.
Step 203b, if the target wake-up information is matched with the second reference wake-up information in the second preset model library, determining a target voice assistant application program corresponding to the target wake-up information.
In this embodiment of the present application, the application processor AP performs matching according to a certain matching rule, for example, obtains a sound feature parameter of the target wake-up information and a sound feature parameter corresponding to the second reference wake-up information, and determines that the target wake-up information matches the second reference wake-up information when the matching degree exceeds 90%, so as to determine that the voice assistant application corresponding to the second reference wake-up information is the target voice assistant application. The process of matching the target wake-up information with the second preset model library may be referred to as a secondary wake-up process.
In other embodiments of the present application, the target wake-up information is a first acoustic model corresponding to the target wake-up word, the first reference wake-up information is a second acoustic model corresponding to a wake-up word, and the second reference wake-up information is a third acoustic model corresponding to a wake-up word. At this time, step 203b may be implemented by the following steps a11 to a12:
Step a11, determining the target wake-up word based on the first acoustic model if the first acoustic model is matched with the third acoustic model.
In this embodiment of the present application, the matching between the first acoustic model and the third acoustic model may mean that the profile of the spectrogram of the first acoustic model matches the profile of the spectrogram of the third acoustic model, that is, the profiles have a certain similarity, for example a similarity of more than 20%, in which case it may be determined that the first acoustic model and the third acoustic model are matched. When the first acoustic model is matched with the third acoustic model, voice recognition is performed on the first acoustic model to determine and obtain the target wake-up word, for example as text information.
Step a12, determining the target voice assistant application program based on the target wake-up word.
In the embodiment of the application, the target wake word is used for uniquely identifying one voice assistant application program, so that the target voice assistant application program can be determined according to the target wake word.
Step 203c, determining the target voice assistant application as the voice assistant application to be awakened.
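As an illustration of this secondary wake-up stage, the following sketch (again reusing the hypothetical similarity() helper; the recognize callback standing in for voice recognition and the 90% threshold are assumptions for illustration only) shows the AP selecting the voice assistant application program to be woken up:

    # Hypothetical sketch of the secondary wake-up on the application processor (steps 203a-203c).
    def ap_secondary_wake(target_wake_info, second_library, assistants, recognize):
        for wake_word, third_model in second_library.items():        # step 203a
            if similarity(target_wake_info, third_model) >= 0.9:     # high matching requirement
                target_word = recognize(target_wake_info)            # step a11: voice recognition
                return assistants.get(target_word)                   # steps a12 and 203c
        return None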
Step 204, wake up the voice assistant application to be waken.
In this embodiment of the present application, after determining the voice assistant application to be awakened, the application processor AP determines that the user wishes the voice assistant application to be awakened to execute certain corresponding voice control instructions, and therefore sends a start instruction to the voice assistant application to be awakened, so that the voice assistant application to be awakened starts working.
In other embodiments of the present application, referring to fig. 3, after executing step 204, the electronic device may optionally further execute the following steps:
step 205, determining a target control instruction based on the voice data information.
In the embodiment of the application, the voice data information sent by the user comprises a target wake-up word and a target control instruction. For example, if the voice data information is "Small X, please query the weather forecast", the electronic device analyzes the voice data information and can determine that the target wake-up word is "Small X" and that the corresponding target control instruction is "query the weather forecast".
In other embodiments of the present application, step 205 may be implemented by the following steps b 11-b 12:
Step b11, acquiring target voice control information from the voice data information.
In the embodiment of the present application, voice recognition and segmentation are performed on the received voice data information, and the voice data information unrelated to the target wake-up word and the control information is deleted. For example, when the voice data information is "Small X, please query the weather forecast of Beijing today", the unrelated information is deleted to obtain the first acoustic model "Small X" corresponding to the target wake-up word and the target voice control information, which may be recorded as "query the weather forecast of Beijing today" or as "query", "Beijing", "today" and "weather forecast".
Step b12, performing voice-signal-to-text conversion on the target voice control information to obtain the target control instruction.
In the embodiment of the present application, the target voice control information is recognized by a speech-to-text technology, and the corresponding voice information is converted into text information to obtain the corresponding target control instruction, namely the target control instruction "query the weather forecast of Beijing today".
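A minimal sketch of steps b11 to b12 is given below; the speech_to_text callback is a hypothetical stand-in for the voice-signal-to-text conversion and is not named in the patent:

    # Hypothetical sketch of steps b11-b12: isolate the control part and convert it to text.
    def extract_control_instruction(voice_data, wake_word, speech_to_text):
        text = speech_to_text(voice_data)         # e.g. "Small X, please query the weather forecast"
        control = text.replace(wake_word, "", 1)  # step b11: drop the wake-up word and unrelated parts
        return control.strip(" ,")                # step b12: the remaining text is the instruction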
Step 206, sending the target control instruction to a server corresponding to the awakened voice assistant application program to be awakened through the awakened voice assistant application program to be awakened.
In this embodiment of the present application, the server corresponding to the awakened voice assistant application may be a server that can be connected through the awakened voice assistant application; that is, the awakened voice assistant application may obtain, from the server, the operation content corresponding to the target control instruction. For example, the electronic device sends the target control instruction "query the weather forecast of Beijing today" to the corresponding server through the Internet by means of the awakened voice assistant application program to be awakened.
Step 207, receiving feedback information sent by the server, and playing the feedback information.
The feedback information is obtained by the server responding to the target control instruction and executing the operation corresponding to the target control instruction.
In this embodiment of the present application, the corresponding server responds to the target control instruction, obtains the weather forecast of Beijing today, and feeds the weather forecast back as feedback information to the awakened voice assistant application program of the electronic device; after receiving the feedback information, the awakened voice assistant application program broadcasts it by voice, for example in the form "Beijing is sunny today, the temperature is 25 degrees, with a light breeze" or the like.
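The round trip of steps 205 to 207 can be sketched as follows; the assistant dictionary, its "server" object with an execute() method and the speaker object are assumptions introduced for illustration, and extract_control_instruction() is the hypothetical helper from the previous sketch:

    # Hypothetical sketch of steps 205-207: send the instruction to the server and play the feedback.
    def handle_after_wake(assistant, voice_data, speech_to_text, speaker):
        instruction = extract_control_instruction(voice_data, assistant["wake_word"], speech_to_text)
        feedback = assistant["server"].execute(instruction)  # step 206: server runs the operation
        speaker.play(feedback)                                # step 207: play the feedback information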
In other embodiments of the present application, referring to fig. 4, before the electronic device performs step 202, the electronic device may further perform the following steps 208 to 211:
step 208, determining a first number of voice helper applications installed in the electronic device.
In an embodiment of the present application, the first number is at least one.
Step 209, obtaining the wake-up word of each of the first number of voice assistant applications.
In the embodiments of the present application, each voice assistant application corresponds to a wake-up word that uniquely identifies it; that is, one wake-up word identifies one voice assistant application.
Step 210, determining a first acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the first preset model library.
In this embodiment of the present application, the first acoustic model corresponding to the wake-up word of each voice assistant application may be obtained from a server provided by a developer; that is, according to the determined first number of wake-up words, the first acoustic models corresponding to the first number of wake-up words are obtained from the server provided by the developer, so as to obtain the first preset model library. It should be noted that, because the voice matching quality requirement of the first preset model library is low, a plurality of wake-up words can share one acoustic model, that is, the accuracy of the acoustic model is relatively low. Accordingly, the number of first acoustic models in the first preset model library is less than or equal to the first number.
Step 211, determining a second acoustic model corresponding to the wake-up word of each voice assistant application program, so as to obtain the second preset model library.
Wherein the quality requirement of the first acoustic model is lower than the quality requirement of the second acoustic model.
In the embodiment of the present application, the number of second acoustic models in the second preset model library is preferably the first number, that is, one wake-up word corresponds to one second acoustic model.
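Steps 208 to 211 can be sketched as follows; the coarse_model_of and fine_model_of callbacks stand in for obtaining the first and second acoustic models (for example from the developer's server) and are hypothetical names:

    # Hypothetical sketch of steps 208-211: build both model libraries at initialisation.
    def build_model_libraries(installed_assistants, coarse_model_of, fine_model_of):
        wake_words = [app["wake_word"] for app in installed_assistants]  # steps 208-209
        first_library = []
        for word in wake_words:                                          # step 210
            coarse = coarse_model_of(word)
            if coarse not in first_library:   # several wake-up words may share one coarse model
                first_library.append(coarse)
        second_library = {word: fine_model_of(word) for word in wake_words}  # step 211
        return first_library, second_library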
In other embodiments of the present application, referring to fig. 5, after the electronic device performs step 211, the following steps 212 to 213 may be further performed:
step 212, if it is detected that the electronic device has installed the updated second number of voice helper applications, determining a changing voice helper application based on the first number of voice helper applications and the second number of voice helper applications.
In this embodiment of the present application, that is, after the voice assistant application installed in the electronic device is updated, the updated second number of voice assistant applications is determined, where the second number may be greater than the first number, or may be equal to the first number, or may be less than the first number.
Step 213, updating the first preset model library and the second preset model library based on wake-up words corresponding to the changed voice assistant application program.
It should be noted that steps 212 to 213 may be performed after or before step 211.
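Steps 212 to 213 can be sketched as follows, reusing the hypothetical build_model_libraries() helper from the previous sketch:

    # Hypothetical sketch of steps 212-213: refresh both libraries when the installed
    # voice assistant applications change.
    def on_assistants_updated(old_assistants, new_assistants, coarse_model_of, fine_model_of):
        old_words = {a["wake_word"] for a in old_assistants}
        new_words = {a["wake_word"] for a in new_assistants}
        if old_words != new_words:            # step 212: the installed applications changed
            return build_model_libraries(new_assistants, coarse_model_of, fine_model_of)  # step 213
        return None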
The embodiment of the application provides an application scenario, as shown in fig. 6, in which the electronic device includes a microphone A, a low-voltage DSP B, an application processor AP C and an Internet terminal D. The dotted-line path represents the transmission path of the data information used to wake up the application program to be woken up, and the solid-line path represents the transmission path of the target voice control information. The microphone A includes a voice collection module, an activation detection module, a forward buffer module and a bottom-layer application interface; the low-voltage DSP B includes the first preset model library and a low-voltage DSP driver; the application processor AP C includes the second preset model library, an audio hardware abstraction layer, a recorder, and an installed first voice assistant application program and second voice assistant application program; the Internet terminal D includes a first server and a second server.
The activation detection module in the microphone A detects in real time whether there is user voice input. If the activation detection module detects user voice input, the voice collection module is started to collect the user's voice. The voice collection module sends the collected voice data information to the forward buffer module for buffering. The voice data information buffered in the forward buffer module is first sent to the low-voltage DSP B through the bottom-layer application interface, and the low-voltage DSP B performs matching processing on the received voice data information and the matching standard information in the first preset model library. When the voice data information matches the matching standard information in the first preset model library, the voice data information is sent through the low-voltage DSP driver of the low-voltage DSP B to the application processor AP C, so that the application processor AP C is activated from the sleep state to the working state. The application processor AP C performs matching processing on the received voice data information and the matching standard information in the second preset model library and determines the specific matching object, that is, determines which voice assistant application program the voice data information corresponds to; when it is determined that the voice data information corresponds to the first voice assistant application program, the first voice assistant application program is woken up. After the first voice assistant application program is woken up, the voice data information buffered in the forward buffer module of the microphone A is sent, through the low-voltage DSP driver of the low-voltage DSP B, to the audio hardware abstraction layer of the application processor AP C; the audio hardware abstraction layer transmits the voice data information to the recorder; and the recorder transmits the voice data information to the awakened first voice assistant application program. The first voice assistant application program is in a communication link with the corresponding first server, obtains the content corresponding to the voice data information, and broadcasts the content.
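The data path of fig. 6 can be strung together from the earlier sketches as follows; the ApStub class and the parameters are hypothetical stand-ins for the components named above and are not part of the patent:

    # Hypothetical end-to-end sketch of the fig. 6 data path.
    class ApStub:
        # Minimal stand-in for the application processor used by the DSP sketch above.
        def wake(self):
            self.awake = True
        def send(self, data):
            self.buffered = data

    def run_pipeline(voice_data, first_library, second_library, assistants,
                     speech_to_text, recognize, speaker):
        ap = ApStub()
        # Low-voltage DSP B: primary wake-up against the first preset model library.
        if not dsp_primary_wake(voice_data, first_library, ap):
            return                            # nothing matched; the AP never leaves the sleep state
        # Application processor AP C: secondary wake-up against the second preset model library.
        app = ap_secondary_wake(extract_wake_info(ap.buffered), second_library, assistants, recognize)
        if app is None:
            return
        app["awake"] = True                   # wake the selected voice assistant application program
        # Forward-buffered data reaches the woken assistant via the audio HAL / recorder path.
        handle_after_wake(app, ap.buffered, speech_to_text, speaker)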
For example, when a second voice assistant application program whose target wake-up word is "Piglet" is installed in an electronic device such as a mobile phone, if the voice data information collected by the voice collection module is "Piglet, please query the weather of Beijing today", the voice data information is sent to the forward buffer module for buffering. The collected "Piglet, please query the weather of Beijing today" is processed in the forward buffer module to obtain the corresponding target wake-up word, namely "Piglet". The target wake-up word "Piglet" is sent to the low-voltage DSP B through the bottom-layer application interface of the microphone. The low-voltage DSP B performs matching processing on the target wake-up word and the acoustic models in the first preset model library to determine the first acoustic model corresponding to "Piglet". It should be noted that, because the fault tolerance of the acoustic models in the first preset model library is relatively high, when the obtained target wake-up word is, for example, "Small A" or "Small Degree", the first acoustic model corresponding to "Piglet" may also be matched; this may be recorded in the first preset model library as {Piglet, Small A, Small Degree} = first acoustic model, or of course in other manners, for example as a voiceprint. Because the target wake-up word "Piglet" is successfully matched in the first preset model library, the target wake-up word "Piglet" is sent to the AP C through the low-voltage DSP driver. The AP C performs matching processing on the target wake-up word "Piglet" and the acoustic models in the second preset model library, determines from the second preset model library the second voice assistant application program matched with the target wake-up word, and wakes up the second voice assistant application program. After the second voice assistant application program is woken up, the voice data information "Piglet, please query the weather of Beijing today" buffered in the forward buffer module is processed to obtain the target control instruction "query the weather of Beijing today". The target control instruction "query the weather of Beijing today" is sent through the bottom-layer application interface of the microphone A to the low-voltage DSP driver of the low-voltage DSP B, so as to be sent through the low-voltage DSP driver to the audio hardware abstraction layer in the AP C. The audio hardware abstraction layer in the AP C sends the received target control instruction "query the weather of Beijing today" to the second voice assistant application program through the recorder; the second voice assistant application program sends the target control instruction to the corresponding second server; the second server obtains the weather condition of Beijing today and feeds it back to the second voice assistant application program; and the second voice assistant application program broadcasts the received weather condition of Beijing through the loudspeaker corresponding to the electronic device, thereby realizing the function of the voice assistant application program. After the electronic device has played the weather condition of Beijing, if voice data information for another installed voice assistant application program is received, the above flow is repeated to wake up the corresponding voice assistant application program and carry out the corresponding control instruction of the user.
In some application scenarios, when the primary wake-up in the low-voltage DSP fails, that is, the target wake-up word fails to match, or when the secondary wake-up in the AP fails, that is, the target wake-up word fails to match, the electronic device may generate corresponding prompt information indicating that the target voice assistant application program could not be determined; the prompt information may be, for example, "the voice assistant application is not installed" or "please re-input the voice information". When the server does not return the corresponding feedback information, corresponding prompt information may also be generated, for example, "no corresponding content is found".
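The prompting behaviour described above can be sketched briefly; the stage names and the speaker object are hypothetical, and the prompt wording is taken from the examples in this paragraph:

    # Hypothetical sketch of the failure prompts for the two wake-up stages and the server reply.
    def prompt_on_failure(stage, speaker):
        if stage in ("primary", "secondary"):
            speaker.play("The voice assistant application is not installed")  # or ask the user to re-input the voice information
        elif stage == "server":
            speaker.play("No corresponding content is found")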
It should be noted that, the type of the information fed back to the voice assistant application program by the server corresponding to the voice assistant application program may be a voice type or a text type, and when the fed back information is a text type, the second voice assistant application program may perform text-to-speech conversion processing on the received information to obtain corresponding voice data, thereby implementing broadcasting. In some application scenarios, the voice assistant application may also translate information fed back by the server according to the needs of the user. In some other application scenarios, the process of acquiring the target wake-up word and the process of acquiring the target control instruction may also be performed in the low voltage DSP. In some application scenarios, the information fed back by the server may also be displayed and stored on a display area corresponding to the voice assistant application program.
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
The embodiment of the application provides a voice information processing method, which is characterized in that voice data information is received, matching processing is carried out on the basis of the voice data information and a first preset model library, if the matching processing result is information matching, matching processing is carried out on the basis of the voice data information and a second preset model library, a voice assistant application program to be awakened is determined, and the voice assistant application program to be awakened is awakened. Therefore, when the received voice data information is matched with the information in the first preset model library, the voice data information is matched with the second preset model library again to determine the voice assistant application program to be awakened and awaken, two-stage awakening is realized, the problems that when one voice assistant application program installed in the electronic equipment is awakened at present, the awakening rate is low and false awakening is easy to occur are solved, the awakening rate is improved, the false awakening probability is reduced, and the intelligent degree of the electronic equipment is improved.
Based on the foregoing embodiments, the embodiments of the present application provide an electronic device, which may be applied to the voice information processing method provided in the corresponding embodiments of fig. 1 to 5, and referring to fig. 7, the electronic device 3 may include: a microphone 31, a low voltage digital signal processor DSP 32, an application processor AP 33, wherein:
a microphone 31 for collecting voice data information;
the low-voltage DSP 32 is configured to perform matching processing on the voice data information and the information in the first preset model library, wake up the AP to achieve a primary wake-up when the voice data information matches the information in the first preset model library, and send the voice data information to the AP, thereby reducing the power consumption of the electronic device;
the AP 33 is configured to perform a matching process on the voice data information and information in the second preset model library, wake up the voice assistant application to be woken up corresponding to the voice data information to implement a secondary wake up when the voice data information is matched with the information in the second preset model library, and execute a related step after waking up the voice assistant application to be woken up.
In other embodiments of the present application, when the low-voltage DSP 32 performs the matching process based on the voice data information and the first preset model library, the following steps are specifically implemented:
Acquiring target wake-up information from voice data information;
matching the target wake-up information with first reference wake-up information in a first preset model library; the first reference wake-up information belongs to at least one piece of matching standard information included in a first preset model library, and is at least one piece of information of a plurality of wake-up information;
and if the target wake-up information is matched with the first reference wake-up information, determining that the information is matched.
In other embodiments of the present application, if the matching result is information matching, the AP 33 performs matching processing based on the voice data information and the second preset model library; when determining the voice assistant application program to be awakened, the following steps are specifically implemented:
if the matching result is information matching, matching the target wake-up information with second reference wake-up information in a second preset model library;
if the target wake-up information is matched with the second reference wake-up information in the second preset model library, determining a target voice assistant application program corresponding to the target wake-up information; the second reference wake-up information belongs to at least one matching standard information included in a second preset model library;
The target voice assistant application is determined to be the voice assistant application to be awakened.
In other embodiments of the present application, the target wake-up information is a first acoustic model corresponding to the target wake-up word; in the low-voltage DSP 32, the first reference wake-up information is a second acoustic model corresponding to a wake-up word; and in the AP 33, the second reference wake-up information is a third acoustic model corresponding to a wake-up word. Correspondingly, when the target wake-up information matches the second reference wake-up information in the second preset model library and the target voice assistant application program corresponding to the target wake-up information is determined, the AP 33 performs the following steps:
if the first acoustic model is matched with the third acoustic model, determining the target wake-up word based on the first acoustic model;
based on the target wake word, a target voice assistant application is determined.
In other embodiments of the present application, the AP 33 performs the following steps after waking up the voice assistant application to be woken up:
determining a target control instruction based on the voice data information;
sending a target control instruction to a server corresponding to the awakened voice assistant application program to be awakened through the awakened voice assistant application program to be awakened;
Receiving feedback information sent by a server and playing the feedback information; the feedback information is obtained by the server responding to the target control instruction and executing the operation corresponding to the target control instruction.
In other embodiments of the present application, the AP 33 performs the following steps when determining the target control instruction based on the voice data information:
acquiring target voice control information from voice data information;
and performing voice-signal-to-text conversion on the target voice control information to obtain the target control instruction.
In other embodiments of the present application, the AP 33 is further configured to, before the low-voltage DSP 32 performs the matching process with the first preset model library based on the voice data information, perform the following steps:
determining a first number of voice assistant applications installed in the electronic device;
acquiring wake words of each voice assistant application program in the first number of voice assistant application programs;
determining a first sound model corresponding to a wake-up word of each voice assistant application program to obtain a first preset model library;
determining a second sound model corresponding to the wake-up word of each voice assistant application program to obtain a second preset model library; wherein the quality requirement of the first acoustic model is lower than the quality requirement of the second acoustic model.
In other embodiments of the present application, the AP 33 is further configured to, after performing the step of determining the second acoustic model corresponding to the wake-up word of each voice assistant application program and obtaining the second preset model library, perform the following steps:
if it is detected that a second number of voice assistant applications are installed on the electronic device after an update, determining the changed voice assistant application based on the first number of voice assistant applications and the second number of voice assistant applications;
updating the first preset model library and the second preset model library based on wake-up words corresponding to the changed voice assistant application program.
It should be noted that, the specific implementation process of the steps executed by the processor in this embodiment may refer to the implementation process in the voice information processing method provided in the corresponding embodiment of fig. 1 to 5, which is not described herein again.
The embodiment of the application provides electronic equipment, which is used for receiving voice data information, carrying out matching processing on the basis of the voice data information and a first preset model library, carrying out matching processing on the basis of the voice data information and a second preset model library if the matching processing result is information matching, determining a voice assistant application program to be awakened, and awakening the voice assistant application program to be awakened. Therefore, when the received voice data information is matched with the information in the first preset model library, the voice data information is matched with the second preset model library again to determine the voice assistant application program to be awakened and awaken, two-stage awakening is realized, the problems that when one voice assistant application program installed in the electronic equipment is awakened at present, the awakening rate is low and false awakening is easy to occur are solved, the awakening rate is improved, the false awakening probability is reduced, and the intelligent degree of the electronic equipment is improved.
Based on the foregoing embodiments, embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, will cause the processor to perform the methods provided by embodiments of the present application, which may be, for example, the methods provided by the corresponding embodiments of fig. 1-5.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application.
Claims (10)
1. A voice information processing method, the method comprising:
receiving voice data information;
performing matching processing based on the voice data information and a first preset model library, wherein the first preset model library comprises a plurality of pieces of matching standard information;
if the result of the matching processing is information matching, performing matching processing based on the voice data information and a second preset model library, and determining a voice assistant application program to be woken up, wherein the first preset model library is different from the second preset model library, and the second preset model library comprises a plurality of pieces of matching standard information; and
waking up the voice assistant application program to be woken up.
2. The method of claim 1, wherein the performing of matching processing based on the voice data information and the first preset model library comprises:
acquiring target wake-up information from the voice data information;
matching the target wake-up information with first reference wake-up information in the first preset model library, wherein the first reference wake-up information belongs to at least one piece of the matching standard information included in the first preset model library and is at least one piece of a plurality of pieces of wake-up information; and
if the target wake-up information matches the first reference wake-up information, determining that the result of the matching processing is information matching.
3. The method of claim 2, wherein, if the result of the matching processing is information matching, the performing of matching processing based on the voice data information and the second preset model library and the determining of the voice assistant application program to be woken up comprise:
if the result of the matching processing is information matching, matching the target wake-up information with second reference wake-up information in the second preset model library, wherein the second reference wake-up information belongs to at least one piece of the matching standard information included in the second preset model library;
if the target wake-up information matches the second reference wake-up information in the second preset model library, determining a target voice assistant application program corresponding to the target wake-up information; and
determining the target voice assistant application program as the voice assistant application program to be woken up.
4. The method of claim 3, wherein the target wake-up information is a first acoustic model corresponding to a target wake-up word, the first reference wake-up information is a second acoustic model corresponding to a wake-up word, the second reference wake-up information is a third acoustic model corresponding to a wake-up word, and the determining of the target voice assistant application program corresponding to the target wake-up information if the target wake-up information matches the second reference wake-up information in the second preset model library comprises:
if the first acoustic model matches the third acoustic model, determining the target wake-up word based on the first acoustic model; and
determining the target voice assistant application program based on the target wake-up word.
5. The method of claim 1, further comprising, after the waking up of the voice assistant application program to be woken up:
determining a target control instruction based on the voice data information;
sending, through the woken voice assistant application program, the target control instruction to a server corresponding to the woken voice assistant application program; and
receiving feedback information sent by the server and playing the feedback information, wherein the feedback information is obtained by the server responding to the target control instruction and executing an operation corresponding to the target control instruction.
6. The method of claim 5, wherein the determining of the target control instruction based on the voice data information comprises:
acquiring target voice control information from the voice data information; and
converting the target voice control information from a voice signal into text to obtain the target control instruction.
7. The method according to any one of claims 1 to 6, further comprising, before the performing of matching processing based on the voice data information and the first preset model library:
determining a first number of voice assistant application programs installed in the electronic device;
acquiring a wake-up word of each voice assistant application program in the first number of voice assistant application programs;
determining a first acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the first preset model library; and
determining a third acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the second preset model library, wherein a quality requirement of the first acoustic model is lower than a quality requirement of the third acoustic model.
8. The method of claim 7, further comprising, after the determining of the third acoustic model corresponding to the wake-up word of each voice assistant application program to obtain the second preset model library:
if it is detected that the voice assistant application programs installed in the electronic device are updated to a second number of voice assistant application programs, determining a changed voice assistant application program based on the first number of voice assistant application programs and the second number of voice assistant application programs; and
updating the first preset model library and the second preset model library based on a wake-up word corresponding to the changed voice assistant application program.
9. An electronic device, comprising: a microphone, a low-voltage digital signal processor (DSP), and an application processor (AP), wherein:
the microphone is configured to collect voice data information;
the low-voltage DSP is configured to perform matching processing on the voice data information and information in a first preset model library, to wake up the AP to realize a first-stage wake-up when the voice data information matches the information in the first preset model library, and to send the voice data information to the AP, so that the power consumption of the electronic device is reduced; and
the AP is configured to perform matching processing on the voice data information and information in a second preset model library, to wake up, when the voice data information matches the information in the second preset model library, a voice assistant application program to be woken up that corresponds to the voice data information so as to realize a second-stage wake-up, and to execute the relevant steps after waking up the voice assistant application program to be woken up.
10. A storage medium having stored thereon a voice information processing program which, when executed by a processor, implements the steps of the voice information processing method according to any one of claims 1 to 8.
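As a companion to claims 7 and 8, the sketch below shows one plausible way the two preset model libraries could be built from the installed voice assistant application programs and updated when that set changes. It is an assumption-laden illustration, not the claimed implementation: the tuple "models", the quality labels, and all identifiers are hypothetical, and a real system would derive a coarse and a fine acoustic model for each wake-up word, with the first stage typically evaluated on the low-voltage DSP and the second on the AP as in claim 9.

```python
# Illustrative sketch of building and updating the two preset model
# libraries of claims 7 and 8. A (wake-up word, quality) tuple stands
# in for an acoustic model; training or loading real models is out of
# scope for this sketch, and every name here is hypothetical.
from typing import Dict, Tuple

Model = Tuple[str, str]        # (wake-up word, quality level)
Library = Dict[str, Model]     # voice assistant application -> model

def build_libraries(installed: Dict[str, str]) -> Tuple[Library, Library]:
    """installed maps each voice assistant application to its wake-up word."""
    first_library = {app: (word, "coarse") for app, word in installed.items()}
    second_library = {app: (word, "fine") for app, word in installed.items()}
    return first_library, second_library

def update_libraries(first_library: Library, second_library: Library,
                     old_installed: Dict[str, str],
                     new_installed: Dict[str, str]) -> None:
    """Claim-8 style update when the set of installed assistants changes."""
    added = new_installed.keys() - old_installed.keys()
    removed = old_installed.keys() - new_installed.keys()
    for app in added:    # newly installed assistants get entries in both libraries
        first_library[app] = (new_installed[app], "coarse")
        second_library[app] = (new_installed[app], "fine")
    for app in removed:  # uninstalled assistants are dropped from both libraries
        first_library.pop(app, None)
        second_library.pop(app, None)

if __name__ == "__main__":
    installed = {"assistant_a": "hey alpha", "assistant_b": "ok bravo"}
    first, second = build_libraries(installed)
    now_installed = {"assistant_a": "hey alpha", "assistant_c": "hi charlie"}
    update_libraries(first, second, installed, now_installed)
    print(first)   # assistant_b removed, assistant_c added
    print(second)
```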
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911353919.8A CN111161714B (en) | 2019-12-25 | 2019-12-25 | Voice information processing method, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111161714A CN111161714A (en) | 2020-05-15 |
CN111161714B (en) | 2023-07-21
Family
ID=70557995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911353919.8A Active CN111161714B (en) | 2019-12-25 | 2019-12-25 | Voice information processing method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161714B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111694605A (en) | 2020-05-18 | 2020-09-22 | Oppo广东移动通信有限公司 | Voice information processing method and device, storage medium and electronic equipment |
CN111667827B (en) * | 2020-05-28 | 2023-10-17 | 北京小米松果电子有限公司 | Voice control method and device for application program and storage medium |
CN111696553B (en) * | 2020-06-05 | 2023-08-22 | 北京搜狗科技发展有限公司 | Voice processing method, device and readable medium |
CN111640434A (en) * | 2020-06-05 | 2020-09-08 | 三星电子(中国)研发中心 | Method and apparatus for controlling voice device |
CN111755002B (en) * | 2020-06-19 | 2021-08-10 | 北京百度网讯科技有限公司 | Speech recognition device, electronic apparatus, and speech recognition method |
CN115148197A (en) * | 2021-03-31 | 2022-10-04 | 华为技术有限公司 | Voice wake-up method, device, storage medium and system |
CN116030804A (en) * | 2021-10-26 | 2023-04-28 | 北京小米移动软件有限公司 | Voice awakening method, voice awakening device and storage medium |
CN114578949A (en) * | 2022-03-23 | 2022-06-03 | 歌尔股份有限公司 | A wake-up method, device and smart wearable device for a smart wearable device |
CN117334182A (en) * | 2022-06-25 | 2024-01-02 | 华为技术有限公司 | Voice interaction method and related device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015074411A1 (en) * | 2013-11-20 | 2015-05-28 | 中兴通讯股份有限公司 | Terminal unlocking method, apparatus and terminal |
CN108877790A (en) * | 2018-05-21 | 2018-11-23 | 江西午诺科技有限公司 | Speaker control method, device, readable storage medium storing program for executing and mobile terminal |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5983179A (en) * | 1992-11-13 | 1999-11-09 | Dragon Systems, Inc. | Speech recognition system which turns its voice response on for confirmation when it has been turned off without confirmation |
CN103309618A (en) * | 2013-07-02 | 2013-09-18 | 姜洪明 | Mobile operating system |
CN103956164A (en) * | 2014-05-20 | 2014-07-30 | 苏州思必驰信息科技有限公司 | Sound awakening method and system |
US10235997B2 (en) * | 2016-05-10 | 2019-03-19 | Google Llc | Voice-controlled closed caption display |
CN107767863B (en) * | 2016-08-22 | 2021-05-04 | 科大讯飞股份有限公司 | Voice awakening method and system and intelligent terminal |
CN106448663B (en) * | 2016-10-17 | 2020-10-23 | 海信集团有限公司 | Voice awakening method and voice interaction device |
CN107315561A (en) * | 2017-06-30 | 2017-11-03 | 联想(北京)有限公司 | A kind of data processing method and electronic equipment |
CN107591151B (en) * | 2017-08-22 | 2021-03-16 | 百度在线网络技术(北京)有限公司 | Far-field voice awakening method and device and terminal equipment |
US20190172457A1 (en) * | 2017-11-30 | 2019-06-06 | Compal Electronics, Inc. | Notebook computer and driving method of voice assistant system |
CN107919123B (en) * | 2017-12-07 | 2022-06-03 | 北京小米移动软件有限公司 | Multi-voice assistant control method, device and computer readable storage medium |
EP3732674A4 (en) * | 2017-12-29 | 2021-09-01 | Fluent.ai Inc. | KEYWORD RECOGNITION SYSTEM WITH LOW POWER CONSUMPTION |
CN110047485B (en) * | 2019-05-16 | 2021-09-28 | 北京地平线机器人技术研发有限公司 | Method and apparatus for recognizing wake-up word, medium, and device |
CN112289313A (en) * | 2019-07-01 | 2021-01-29 | 华为技术有限公司 | Voice control method, electronic equipment and system |
CN110602624B (en) * | 2019-08-30 | 2021-05-25 | Oppo广东移动通信有限公司 | Audio test method, device, storage medium and electronic equipment |
CN110600008A (en) * | 2019-09-23 | 2019-12-20 | 苏州思必驰信息科技有限公司 | Voice wake-up optimization method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111161714A (en) | 2020-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161714B (en) | Voice information processing method, electronic equipment and storage medium | |
CN107704275B (en) | Intelligent device awakening method and device, server and intelligent device | |
CN111223497B (en) | Nearby wake-up method and device for terminal, computing equipment and storage medium | |
CN102999161B (en) | A kind of implementation method of voice wake-up module and application | |
CN109326289B (en) | Wake-up-free voice interaction method, device, device and storage medium | |
CN113327609B (en) | Method and apparatus for speech recognition | |
CN106782554B (en) | Voice awakening method and device based on artificial intelligence | |
CN102568478B (en) | Video play control method and system based on voice recognition | |
WO2020228270A1 (en) | Speech processing method and device, computer device and storage medium | |
WO2021082572A1 (en) | Wake-up model generation method, smart terminal wake-up method, and devices | |
CN113160815B (en) | Intelligent control method, device, equipment and storage medium for voice wakeup | |
CN110875045A (en) | Voice recognition method, intelligent device and intelligent television | |
CN105009204A (en) | Speech recognition power management | |
EP2389672A1 (en) | Method, apparatus and computer program product for providing compound models for speech recognition adaptation | |
CN112420044A (en) | Voice recognition method, voice recognition device and electronic equipment | |
KR20240090400A (en) | Continuous conversation based on digital signal processor | |
CN111583933B (en) | Voice information processing method, device, equipment and medium | |
CN109524010A (en) | A kind of sound control method, device, equipment and storage medium | |
CN111833870A (en) | Awakening method and device of vehicle-mounted voice system, vehicle and medium | |
CN116013257A (en) | Speech recognition and speech recognition model training method, device, medium and equipment | |
CN112885341A (en) | Voice wake-up method and device, electronic equipment and storage medium | |
CN112185425A (en) | Audio signal processing method, device, equipment and storage medium | |
CN113889116A (en) | Voice information processing method and device, storage medium and electronic device | |
CN112306560B (en) | Method and apparatus for waking up an electronic device | |
CN112309396A (en) | AI virtual robot state dynamic setting system |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant