
CN104423925B - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Info

Publication number
CN104423925B
Authority
CN
China
Prior art keywords
input operation
input
recognition engine
state
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310376885.0A
Other languages
Chinese (zh)
Other versions
CN104423925A (en)
Inventor
贾旭
张渊毅
彭世峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201310376885.0A
Publication of CN104423925A
Application granted
Publication of CN104423925B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 - Scrolling or panning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides an information processing method and an electronic device. The electronic device includes a display unit and a speech recognition engine. The method includes: displaying M objects including a first object, where the first object is the identifier of the speech recognition engine; obtaining an input operation; determining whether the input operation satisfies a predetermined condition, and, when it does and serves as a first input operation, controlling the speech recognition engine to switch from a low power consumption state to a normal operating state; obtaining voice input when the speech recognition engine is in the sound-reception state of the normal operating state; recognizing the voice input based on parameter information of a second object when the engine is in the recognition state; and outputting a recognition result when the engine is in the result-feedback state. With this application, the user need only perform one simple input operation to activate the sound-reception function of the electronic device, and the device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information processing method and an electronic device.
Background
On a mobile phone or other intelligent platform, voice has its own advantages as an interactive mode, such as information entry, retrieval, execution of deep commands. In the prior art, a typical application of a voice interaction mode is a voice assistant Siri, and a user using Siri can read a short message, search for a contact, introduce a restaurant, inquire weather, set an alarm clock by voice, and the like through a mobile phone.
However, in the course of making the invention, the inventors found that voice interaction through Siri takes a long time and involves a complicated process. Taking contact search as an example, Siri must first be launched and its dedicated interface entered before a voice command can be received and processed. In addition, the act of launching Siri itself burdens both the system and the user, and is far less direct than simply opening the contacts list, so a user would rather search for a contact through the contacts application than through Siri.
Disclosure of Invention
In view of the above, the present invention provides an information processing method and an electronic device, so as to solve the problems of long time and complicated process of performing voice interaction in the prior art, and the technical scheme is as follows:
an information processing method applied to an electronic device, the electronic device including a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal operation state, the method comprising:
displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine;
obtaining an input operation;
determining whether the input operation meets a predetermined condition;
when the input operation meets the predetermined condition and serves as a first input operation, controlling the voice recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
when the voice recognition engine is in the sound receiving state of the normal working state, acquiring voice input;
when the voice recognition engine is in a recognition state of the normal working state, recognizing the voice input based on the parameter information of the second object;
and outputting a recognition result when the voice recognition engine is in a result feedback state of the normal working state.
Optionally, when the input operation does not meet a predetermined condition and the input operation is taken as a second input operation, controlling the speech recognition engine to switch from the low power consumption state to the normal operating state;
when the voice recognition engine is in a sound receiving state of the normal working state, acquiring the voice input;
when the voice recognition engine is in a recognition state of the normal working state, recognizing the voice input;
and outputting a recognition result when the voice recognition engine is in a result feedback state of the normal working state.
Wherein the determining whether the input operation satisfies a predetermined condition includes:
determining a first object according to the input operation;
and prompting N objects in the M objects when the first object is determined, wherein the parameter information of each object in the N objects can act on the recognition state of the normal working state of the voice recognition engine.
Wherein the predetermined condition is whether the objects determined by the input operation include a first object and a second object, the second object being one of the M objects determined at an end point of the input operation;
and/or,
the predetermined condition is whether the object determined by the input operation includes a first object and a second object, and the second object is one of the N objects determined at the end point of the input operation.
Wherein the input operation is an operation of two control input points;
the determining whether the input operation satisfies a predetermined condition includes:
determining a first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule, and determining a second object when a second manipulation input point of the two manipulation input points satisfies the predetermined rule;
when the input operation is finished, determining that the input operation corresponds to the first object and the second object;
or,
the input operation is a sliding input operation;
the determining whether the input operation satisfies a predetermined condition includes:
determining a first object when the track point of the input operation meets a preset rule;
determining a second object when the end point of the input operation meets a preset rule;
when the input operation is finished, determining that the input operation corresponds to the first object and the second object;
or,
the electronic equipment is provided with a touch sensing unit, a touch area of the touch sensing unit is divided into a first area and a second area, the second area is overlapped with the display unit, and the input operation is sliding operation;
the determining whether the input operation satisfies a predetermined condition includes:
when the starting point of the input operation is in the first area and the input operation moves from the first area to the dividing line of the first area and the second area, determining the first object and displaying the first object, so that the input operation controls the first object to move in the second area;
determining a second object if an end point of the input operation satisfies a predetermined rule while the input operation moves from the first area into the second area;
and when the input operation is finished, determining that the input operation corresponds to the first object and the second object.
An electronic device including a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal operation state, the electronic device comprising:
the display module is used for displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine;
the first acquisition module is used for acquiring input operation;
the determining module is used for determining whether the input operation meets a preset condition;
a first control module, configured to control the speech recognition engine to switch from the low power consumption state to the normal operating state when the input operation satisfies the predetermined condition and the input operation is a first input operation, where the first input operation is capable of determining the first object and a second object, and the second object belongs to the M objects;
the second acquisition module is used for acquiring voice input when the voice recognition engine is in a sound reception state of the normal working state;
the first recognition module is used for recognizing the voice input based on the parameter information of the second object when the voice recognition engine is in the recognition state of the normal working state;
and the first output module is used for outputting a recognition result when the voice recognition engine is in a result feedback state of the normal working state.
Optionally, the electronic device further includes:
the second control module is used for controlling the voice recognition engine to be switched from the low power consumption state to the normal working state when the input operation does not meet the preset condition and is taken as the second input operation;
the third acquisition module is used for acquiring the voice input when the voice recognition engine is in the sound reception state of the normal working state;
the second recognition module is used for recognizing the voice input when the voice recognition engine is in a recognition state of the normal working state;
and the second output module is used for outputting the recognition result when the voice recognition engine is in the result feedback state of the normal working state.
Wherein the determining module comprises:
a first determination submodule for determining a first object according to the input operation;
a prompt submodule, configured to prompt N objects of the M objects when the first object is determined, where parameter information of each object of the N objects is capable of acting on a recognition state of the normal operating state of the speech recognition engine.
Wherein the predetermined condition is whether the objects determined by the input operation include a first object and a second object, the second object being one of the M objects determined at an end point of the input operation;
and/or, the predetermined condition is whether the objects determined by the input operation include a first object and a second object, wherein the second object is one of the N objects determined at the end point of the input operation.
Wherein the input operation is an operation of two control input points;
the determining module comprises:
a second determination submodule for determining a first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule;
a third determination submodule for determining a second object when a second manipulation input point of the two manipulation input points satisfies a predetermined rule;
the fourth determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished;
or,
the input operation is a sliding input operation;
the determining module comprises:
a fifth determination submodule for determining the first object when the track point of the input operation satisfies a predetermined rule;
a sixth determination sub-module for determining a second object when an end point of the input operation satisfies a predetermined rule;
a seventh determining submodule, configured to determine that the input operation corresponds to the first object and the second object when the input operation is ended;
or,
the electronic equipment is provided with a touch sensing unit, a touch area of the touch sensing unit is divided into a first area and a second area, the second area is overlapped with the display unit, and the input operation is sliding input operation;
the determining module comprises:
an eighth determination submodule configured to determine the first object and display the first object when a start point of the input operation is in the first area and the input operation moves from the first area to a dividing line between the first area and the second area, so that the input operation controls the first object to move in the second area;
a ninth determining sub-module for determining a second object if an end point of the input operation satisfies a predetermined rule when the input operation moves from the first area into the second area;
and the tenth determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished.
The technical scheme has the following beneficial effects:
the invention provides an information processing method and electronic equipment, wherein M objects are displayed on a display unit, the M objects comprise a first object, an input operation is obtained, whether the input operation meets a preset condition or not is determined, when the input operation meets the preset condition and is taken as the first input operation, a voice recognition engine is controlled to be switched from a low-power consumption state to a normal working state, when the voice recognition engine is in a sound receiving state of the normal working state, the voice input is obtained, when the voice recognition engine is in a recognition state of the normal working state, the voice input is recognized based on parameter information of a second object, and when the voice recognition engine is in a result feedback state of the normal working state, a recognition result is output. According to the information processing method and the electronic equipment, the user can start the radio function of the electronic equipment only by executing a simple input operation, and the electronic equipment can quickly feed back a result aiming at voice input, so that the operation is simple, and the user experience is better.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart of a first information processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second information processing method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a slide input operation in a second information processing method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a slide input operation in a second information processing method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a slide input operation in a second information processing method according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a slide input operation in a second information processing method according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a third information processing method according to an embodiment of the present invention;
fig. 8 is an operation diagram of two manipulation input points in a third information processing method according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating operations of two manipulation input points according to a third information processing method provided in the embodiment of the present invention;
FIG. 10 is a diagram illustrating operations of two manipulation input points according to a third information processing method of the present invention;
FIG. 11 is a flowchart illustrating a fourth information processing method according to an embodiment of the present invention;
fig. 12 is a schematic diagram illustrating a slide input operation in a fourth information processing method according to an embodiment of the present invention;
fig. 13 is a schematic diagram illustrating a slide input operation in a fourth information processing method according to an embodiment of the present invention;
fig. 14 is a schematic diagram illustrating a slide input operation in a fourth information processing method according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of an information processing method according to an embodiment of the present invention is shown, where the method is applied to an electronic device, the electronic device includes a display unit and a speech recognition engine, and the speech recognition engine has a low power consumption state and a normal operating state, and the method according to the embodiment of the present invention includes:
step S101: and displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine.
Step S102: an input operation is obtained.
Step S103: it is determined whether the input operation satisfies a predetermined condition.
Step S104: and when the input operation meets a preset condition and is taken as a first input operation, controlling the voice recognition engine to switch from a low-power consumption state to a normal working state, wherein the first input operation can determine a first object and a second object, and the second object belongs to the M objects.
Step S105: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
Step S106: and when the speech recognition engine is in the recognition state of the normal working state, recognizing the speech input based on the parameter information of the second object.
Step S107: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
In the information processing method provided by the embodiment of the invention, M objects including a first object are displayed on the display unit; an input operation is obtained; whether the input operation meets a predetermined condition is determined; when the input operation meets the predetermined condition and serves as a first input operation, the voice recognition engine is controlled to switch from the low power consumption state to the normal working state; voice input is obtained when the engine is in the sound receiving state of the normal working state; the voice input is recognized based on parameter information of a second object when the engine is in the recognition state; and a recognition result is output when the engine is in the result feedback state. With this method, the user need only perform one simple input operation to activate the sound-reception function of the electronic device, and the device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.
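The flow of steps S101 to S107 can be sketched as a small state machine. This is an illustrative sketch only, not the patented implementation; the state names and the substring-based matching used for "recognition" are assumptions made for demonstration.

```python
from enum import Enum, auto

class EngineState(Enum):
    LOW_POWER = auto()     # engine asleep, awaiting the first input operation
    RECEIVING = auto()     # sound-reception state of the normal operating state
    RECOGNIZING = auto()   # recognition state
    FEEDBACK = auto()      # result-feedback state

class SpeechEngineSketch:
    """Hypothetical model of the engine's states in steps S101-S107."""

    def __init__(self):
        self.state = EngineState.LOW_POWER

    def wake(self):
        # Step S104: the first input operation satisfied the predetermined
        # condition, so switch from low power to the normal operating state.
        self.state = EngineState.RECEIVING

    def receive(self, audio):
        # Step S105: obtain the voice input while in the sound-reception state.
        assert self.state is EngineState.RECEIVING
        self.state = EngineState.RECOGNIZING
        return audio

    def recognize(self, audio, parameter_info):
        # Steps S106-S107: match against the second object's parameter
        # information, then move to the result-feedback state.
        assert self.state is EngineState.RECOGNIZING
        result = next((e for e in parameter_info if e.lower() in audio.lower()), None)
        self.state = EngineState.FEEDBACK
        return result
```

In this toy model the "parameter information" is simply a list of candidate strings; a real engine would constrain its acoustic or language model instead.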
Referring to fig. 2, a schematic flow chart of another information processing method according to an embodiment of the present invention is shown, where the method is applied to an electronic device, the electronic device includes a touch sensing unit, a display unit, and a speech recognition engine, a touch area of the touch sensing unit is divided into a first area and a second area, the second area is overlapped with the display unit, and the speech recognition engine has a low power consumption state and a normal operating state, where the method includes:
step S201: and displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine.
Step S202: a slide input operation is obtained.
Step S203: it is determined whether the slide input operation satisfies a predetermined condition.
In this embodiment, the M objects displayed on the display unit (excluding the first object) may all be interactable objects, where the parameter information of each interactable object can act on the recognition state of the normal operating state of the speech recognition engine. Alternatively, the M objects (excluding the first object) may include both interactable and non-interactable objects. In the latter case, when the first object is determined according to the input operation, N interactable objects among the M objects may be determined and prompted, for example by displaying a mark on each of them; the parameter information of each of the N objects can act on the recognition state of the normal operating state of the speech recognition engine.
Determining whether the slide input operation satisfies the predetermined condition specifically means determining whether the objects determined by the input operation include the first object and a second object, where the second object is an interactable object among the M objects determined at the end point of the input operation, and the parameter information of that interactable object can act on the recognition state of the normal operating state of the speech recognition engine. Specifically, when the M objects (excluding the first object) are all interactable, the second object is one of them, determined at the end point of the input operation; when the M objects include both interactable and non-interactable objects, the second object is one of the interactable objects determined at the end point of the input operation.
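The predetermined-condition test above can be read as a pure function over the objects a gesture has touched. The function and parameter names below are illustrative assumptions, not the patent's terminology.

```python
def satisfies_predetermined_condition(determined_objects,
                                      end_point_object,
                                      first_object,
                                      interactable_objects):
    """True when the gesture determined the first object (the speech
    recognition engine's identifier) and ended on an interactable object,
    which then becomes the second object."""
    return (first_object in determined_objects
            and end_point_object in interactable_objects)
```

For example, a slide that drags the engine icon onto a music app's icon determines both objects and satisfies the condition; a slide that never touches the engine icon does not.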
Further, whether the object determined by the input operation includes the first object and the second object specifically includes:
when a starting point of the slide input operation is in the first area and the slide input operation moves from the first area to a dividing line between the first area and the second area, the first object is determined and displayed so that the slide input operation controls the first object to move in the second area. Taking an electronic device as a mobile phone as an example, please refer to fig. 3, the first area may be a main screen area of the mobile phone, and the second area may be a frame area below the main screen of the mobile phone. The sliding input operation is that fingers slide from a frame area below a main screen of the mobile phone to a main screen area of the mobile phone, when the fingers slide to a dividing line between the main screen area and the frame area below the mobile phone, an identifier of a first object, namely a voice recognition engine, is displayed from the edge below the main screen, and the finger slides to drive the identifier of the voice recognition engine to move in the main screen area.
When the slide input operation moves from the first area into the second area, the second object is determined if the end point of the input operation satisfies a predetermined rule, namely: the object at the end point of the input operation is an interactable object, and the occlusion ratio between that object and the first object is greater than a preset ratio. Specifically, if the M objects (excluding the first object) are all interactable, the object at the end point of the slide is necessarily interactable; if the M objects contain N interactable objects, the object at the end point is interactable only when it is one of those N objects. After the object at the end point is confirmed to be interactable, the occlusion ratio between the first object and that object is obtained. Because the slide input operation controls the movement of the first object, the first object stops at the end point when the slide ends; at that moment the occlusion ratio between the first object and the interactable object at the end point can be obtained and compared against the preset ratio. If the ratio exceeds the preset value, the object at the end point is determined to be the second object.
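The occlusion-ratio test can be implemented as a rectangle-intersection computation. A minimal sketch, assuming axis-aligned bounding boxes and measuring how much of the first object's area the candidate covers (the 0.5 threshold is an assumed value, not one the patent specifies):

```python
def occlusion_ratio(first, candidate):
    """Fraction of the first object's rectangle covered by the candidate.
    Rectangles are (left, top, right, bottom) tuples."""
    left = max(first[0], candidate[0])
    top = max(first[1], candidate[1])
    right = min(first[2], candidate[2])
    bottom = min(first[3], candidate[3])
    if right <= left or bottom <= top:
        return 0.0  # no overlap at all
    intersection = (right - left) * (bottom - top)
    first_area = (first[2] - first[0]) * (first[3] - first[1])
    return intersection / first_area

def is_second_object(first_rect, candidate_rect, preset_ratio=0.5):
    # The candidate at the slide's end point becomes the second object only
    # when the occlusion ratio exceeds the preset ratio.
    return occlusion_ratio(first_rect, candidate_rect) > preset_ratio
```

Dropping the engine icon so that it mostly covers an app icon therefore selects that app; a glancing overlap does not.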
When the input operation ends, the first object and the second object can be determined. After the slide input operation ends, its end point contains two objects: the first object, which is the object that moved along with the slide, and the second object, which is the other object at the end point.
Step S204: and when the input operation meets a preset condition and is taken as a first input operation, controlling the voice recognition engine to switch from a low-power consumption state to a normal working state, wherein the first input operation can determine a first object and a second object, and the second object belongs to the M objects.
In this embodiment, the second object may be, but is not limited to, an identification of an application program, a search progress bar, a text input box, or a contact. Similarly, taking the electronic device as a mobile phone and the second object as an identifier of the music playing program as an example, please refer to fig. 4, when the user slides a finger from an edge area below the main screen area to the main screen area, the identifier of the speech recognition engine appears below the main screen area, the finger drives the identifier of the speech recognition engine to move to the identifier of the music playing program, and at this time, the speech recognition engine is switched from the low power consumption state to the normal operating state.
Step S205: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
When the voice recognition engine is switched to the sound receiving state in the normal working state from the low power consumption state, a sound receiving interface can be displayed on the display unit so as to prompt a user to carry out voice input.
Step S206: when the speech recognition engine is in a recognition state of a normal operation state, the speech input is recognized based on the parameter information of the second object.
Since the second object is an interactable object, its parameter information can act on the recognition state of the normal working state of the speech recognition engine. When the engine is in the recognition state, the voice input can therefore be recognized based on the parameter information of the second object; specifically, the voice input is recognized within the information set corresponding to the second object.
Step S207: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
In this embodiment, the output of the recognition result may be, but is not limited to, displaying the recognized information and/or controlling the application program to perform the corresponding operation.
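The engine's progression through the sub-states of the normal working state (steps S204 to S207) can be sketched as a small state machine. The class and method names are illustrative, and recognition is reduced to set membership for brevity:

```python
from enum import Enum, auto

class EngineState(Enum):
    LOW_POWER = auto()    # engine dormant, awaiting a qualifying input
    RECEPTION = auto()    # sound reception state of the normal working state
    RECOGNITION = auto()  # recognition state of the normal working state
    FEEDBACK = auto()     # result feedback state of the normal working state

class SpeechEngine:
    def __init__(self):
        self.state = EngineState.LOW_POWER
        self.buffer = None

    def wake(self):
        # A qualifying input operation switches the engine out of low power.
        self.state = EngineState.RECEPTION

    def receive(self, speech):
        # Sound reception state: obtain the voice input (step S205).
        assert self.state is EngineState.RECEPTION
        self.buffer = speech
        self.state = EngineState.RECOGNITION

    def recognize(self, info_set):
        # Recognition state: match the input against an information set
        # (step S206), then move to result feedback (step S207).
        assert self.state is EngineState.RECOGNITION
        result = self.buffer if self.buffer in info_set else None
        self.state = EngineState.FEEDBACK
        return result
```

A caller would invoke `wake`, `receive`, and `recognize` in order, mirroring the sequence of steps in this embodiment.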
Similarly, taking the second object as the identifier of the music playing program as an example: after receiving the voice input, the speech recognition engine enters the recognition state of the normal working state and recognizes the voice input based on the information set corresponding to the identifier of the music playing program. Specifically, assuming the voice input is a request to play the song "Invisible Wings", the song is searched for in the information set corresponding to the icon of the music playing program; once found, the recognition result is output, that is, the music playing program is started and the song "Invisible Wings" is played. Taking the identifier of the address book as an example: when the speech recognition engine enters the sound reception state of the normal working state, it receives the voice input; if the voice input is a request to look up Li Ming's telephone number, the engine switches from the sound reception state to the recognition state of the normal working state and searches for Li Ming's telephone number in the information set corresponding to the identifier of the address book; once the number is found, it is displayed on the display unit.
The above process describes information processing when the sliding input operation satisfies the preset condition; the processing when the sliding input operation does not satisfy the preset condition is given below:
Step 208: when the sliding input operation does not satisfy the preset condition and the input operation serves as a second input operation, controlling the speech recognition engine to switch from the low power consumption state to the normal working state.
Specifically, the sliding input operation not satisfying the preset condition includes: the object at the end point of the sliding input operation includes only the first object, or the object at the end point of the sliding input operation is a non-interactable object among the M objects. For example, the user may drag the identifier of the speech recognition engine with a finger to a blank area, or drag it to a non-interactable object.
Step 209: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
Step 210: when the speech recognition engine is in a recognition state of a normal operating state, speech input is recognized.
Since the object at the end point of the sliding input operation does not include an interactable object, the electronic device cannot perform recognition based on the parameter information of a specific object; in this case, when the speech recognition engine is in the recognition state of the normal working state, the voice input must be recognized against all information sets.
Step 211: and outputting the recognition result when the speech recognition engine is in the result feedback state of the normal working state.
Similarly, taking the electronic device as a mobile phone as an example: when the sliding input operation enters the main screen area, it controls the identifier of the speech recognition engine to move within that area. When the sliding input operation ends, the object at its end point is detected. If the object at the end point is only the identifier of the speech recognition engine, the finger finally stopped in a blank area; please refer to fig. 5. In this case, the speech recognition engine can be controlled to switch from the low power consumption state to the normal working state, the voice input is obtained when the engine is in the sound reception state of the normal working state, and the voice input is recognized against all information sets when the engine is in the recognition state of the normal working state. Similarly, if the object detected at the end point of the sliding input operation is a non-interactable object, the finger finally stopped on a non-interactable object; please refer to fig. 6.
It should be noted that, when the input operation satisfies the preset condition, it is determined that the input operation is the first input operation, the first input operation may determine the second object, and since the second object is an interactive object, when recognizing the voice input, the voice input may be recognized in the information set corresponding to the second object; when the input operation does not meet the preset condition, the input operation is determined to be a second input operation, and the second input operation does not include an interactive object, so that the voice input needs to be recognized in all information sets. The former is identified in the information set corresponding to the second object, and the latter is identified in all the information sets, and the comparison of the two shows that the former has strong pertinence, the information identification range is small, and the identification efficiency and the identification accuracy are higher.
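The two recognition paths compared above, recognition within the second object's information set versus recognition across all information sets, can be sketched as a single lookup. The dictionary layout and names are illustrative:

```python
def recognize(query, info_sets, second_object=None):
    """Recognize `query` in the second object's information set when a
    first input operation determined one; otherwise fall back to a
    search across all sets. `info_sets` maps each interactable object
    to its information set (an assumed representation)."""
    if second_object is not None:
        candidates = info_sets[second_object]          # narrow, targeted search
    else:
        candidates = set().union(*info_sets.values())  # global fallback
    return query if query in candidates else None
```

The scoped path searches a single set, which is why the embodiment notes that the first input operation yields a smaller recognition range and higher recognition efficiency and accuracy.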
In the information processing method provided by the embodiment of the invention, M objects including a first object are displayed on a display unit; an input operation is obtained, and whether it satisfies a preset condition is determined. When the input operation satisfies the preset condition and serves as a first input operation, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; the voice input is obtained when the engine is in the sound reception state of the normal working state, the voice input is recognized based on the parameter information of a second object when the engine is in the recognition state of the normal working state, and the recognition result is output when the engine is in the result feedback state of the normal working state. With the information processing method provided by the embodiment of the invention, the user only needs to perform a simple input operation to make the electronic device start sound reception, and the electronic device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.
Referring to fig. 7, a schematic flow chart of another information processing method according to an embodiment of the present invention is shown, where the method is applied to an electronic device, the electronic device includes a display unit and a speech recognition engine, and the speech recognition engine has a low power consumption state and a normal operating state, and the method according to the embodiment of the present invention includes:
step S301: and displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine.
Step S302: the operation of two manipulation input points is obtained.
Step S303: it is determined whether the operations of the two manipulation input points satisfy a predetermined condition.
In this embodiment, the M objects (except the first object) displayed on the display unit are all interactable objects, and the parameter information of each interactable object can act on the recognition state of the normal working state of the speech recognition engine. Of course, the M objects (except the first object) may also include both interactable and non-interactable objects. When the M objects include non-interactable objects and the first object is determined according to the input operation, N objects among the M objects can be prompted, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine. That is, when the first object is determined, the N interactable objects can be determined from the M objects and prompted, for example by displaying a mark on each interactable object.
Determining whether the operations of the two manipulation input points satisfy the predetermined condition is specifically: determining whether the objects determined by the operations of the two manipulation input points include a first object and a second object, where the second object is the object at one of the two manipulation input points, is an interactable object among the M objects, and has parameter information that can act on the recognition state of the normal working state of the speech recognition engine. Specifically, when the M objects (except the first object) displayed on the display unit are all interactable objects, the second object is one of the M objects (except the first object); when the M objects (except the first object) include both interactable and non-interactable objects, the second object is one of the interactable objects among the M objects.
Further, determining whether the objects determined by the operations of the two manipulation input points include the first object and the second object may specifically be: determining the first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule, determining the second object when a second manipulation input point of the two manipulation input points satisfies the predetermined rule, and determining, when the input operation ends, that the input operation corresponds to the first object and the second object. The predetermined rule is specifically: the operation duration of each of the two manipulation input points is longer than a preset duration.
Specifically, determining the first object when the first manipulation input point satisfies the predetermined rule includes: obtaining the operation duration of the first manipulation input point, judging whether it is greater than a first preset duration, and if so, determining that the object at the first manipulation input point is the first object. Likewise, determining the second object when the second manipulation input point satisfies the predetermined rule includes: obtaining the operation duration of the second manipulation input point, judging whether it is greater than a second preset duration, and if it is greater than the second preset duration and the object at the second manipulation input point is an interactable object, determining that the object at the second manipulation input point is the second object. It should be noted that when the operations of the two manipulation input points end, the object at the first manipulation input point is determined to be the first object and the object at the second manipulation input point the second object only if the operation duration of the first manipulation input point is longer than the first preset duration and the operation duration of the second manipulation input point is longer than the second preset duration. The first preset duration may be set equal to, or greater than, the second preset duration.
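The duration test described above can be sketched as follows. The function and parameter names are illustrative, and the 5-second thresholds merely echo the example given later in this embodiment:

```python
def objects_from_two_points(p1_duration, p1_object,
                            p2_duration, p2_object, p2_interactable,
                            first_preset=5.0, second_preset=5.0):
    """Return (first_object, second_object) when both manipulation input
    points were held longer than their preset durations and the second
    point rests on an interactable object; otherwise return None."""
    if (p1_duration > first_preset
            and p2_duration > second_preset
            and p2_interactable):
        return p1_object, p2_object
    return None
```

For instance, holding the engine identifier for 6 seconds and a music-program identifier for 7 seconds determines both objects, while releasing either point early, or resting the second point on a blank area, determines neither.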
Step S304: and when the input operation meets a preset condition and is taken as a first input operation, controlling the voice recognition engine to switch from a low-power consumption state to a normal working state, wherein the first input operation can determine a first object and a second object, and the second object belongs to the M objects.
In this embodiment, the second object is an interactable object, and may be, but is not limited to, an identification of an application program, a search progress bar, a text input box, or a contact. Taking the second object as the identifier of an application program, for example the identifier of a music playing program: the identifier of the speech recognition engine and the identifier of the music playing program are displayed on the display unit of the electronic device; referring to fig. 8, the user presses the identifier of the speech recognition engine with the thumb and the identifier of the music playing program with the index finger. The electronic device judges whether the duration of the input operation at the first manipulation input point, on the identifier of the speech recognition engine, is longer than a first preset duration, such as 5 seconds, and at the same time judges whether the duration of the input operation at the second manipulation input point, on the identifier of the music playing program, is longer than a second preset duration, such as 5 seconds. If both durations exceed their respective preset durations, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state.
Step S305: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
When the voice recognition engine is switched to the sound receiving state in the normal working state from the low power consumption state, a sound receiving interface can be displayed on the display unit so as to prompt a user to carry out voice input.
Step S306: when the speech recognition engine is in a recognition state of a normal operation state, the speech input is recognized based on the parameter information of the second object.
When the speech recognition engine is in the recognition state, recognizing the speech input based on the parameter information of the second object specifically includes: recognition is performed for the speech input in a set of information corresponding to the second object.
Step S307: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
In this embodiment, the output of the recognition result may be, but is not limited to, displaying the recognized information and/or controlling the application program to perform the corresponding operation.
Also taking the second object as the identifier of the music playing program as an example: if the voice input is a request to play the song "Invisible Wings", the song is searched for in the information set corresponding to the icon of the music playing program; once found, the recognition result is output, that is, the music playing program is started and the song "Invisible Wings" is played. Taking the identifier of the address book as an example: when the speech recognition engine enters the sound reception state of the normal working state, it receives the voice input; if the voice input is a request to look up Li Ming's telephone number, the engine switches from the sound reception state to the recognition state of the normal working state and searches for Li Ming's telephone number in the information set corresponding to the identifier of the address book; once the number is found, it is displayed on the display unit.
The above process is an information processing manner when the operations of the two manipulation input points satisfy the preset condition, and the information processing manner when the operations of the two manipulation input points do not satisfy the preset condition is given below:
Step 308: when the operations of the two manipulation input points do not satisfy the preset condition and the input operation serves as a second input operation, controlling the speech recognition engine to switch from the low power consumption state to the normal working state.
Specifically, the operations of the two manipulation input points not satisfying the preset condition include: the objects at the two manipulation input points include only the first object, or the objects at the two manipulation input points are the first object and one non-interactable object among the M objects. For example, referring to FIG. 9, the user presses a blank area with the index finger while pressing the identifier of the speech recognition engine with the thumb, or, referring to FIG. 10, presses a non-interactable object with the index finger while pressing the identifier of the speech recognition engine with the thumb.
Step 309: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
Step 310: when the speech recognition engine is in a recognition state of a normal operating state, speech input is recognized.
Specifically, when the speech recognition engine is in a recognition state of a normal operation state, recognition is performed for the speech input in all the information sets.
Step 311: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
It should be noted that when the operations of two manipulation input points satisfy the preset condition, the input operation is determined as a first input operation, the first input operation may determine a second object, and since the second object is an interactive object, when recognizing a voice input, the voice input may be recognized in an information set corresponding to the second object; when the input operation does not meet the preset condition, the input operation is determined to be a second input operation, and the second input operation does not include an interactive object, so that the voice input needs to be recognized in all information sets. The former is identified in the information set corresponding to the second object, and the latter is identified in all the information sets, and the comparison of the two shows that the former has strong pertinence, the information identification range is small, and the identification efficiency and the identification accuracy are higher.
In the information processing method provided by the embodiment of the invention, M objects including a first object are displayed on a display unit; an input operation is obtained, and whether it satisfies a preset condition is determined. When the input operation satisfies the preset condition and serves as a first input operation, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; the voice input is obtained when the engine is in the sound reception state of the normal working state, the voice input is recognized based on the parameter information of a second object when the engine is in the recognition state of the normal working state, and the recognition result is output when the engine is in the result feedback state of the normal working state. With the information processing method provided by the embodiment of the invention, the user only needs to perform a simple input operation to make the electronic device start sound reception, and the electronic device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.
Referring to fig. 11, a schematic flow chart of another information processing method according to an embodiment of the present invention is shown, where the method is applied to an electronic device, the electronic device includes a display unit and a speech recognition engine, and the speech recognition engine has a low power consumption state and a normal operating state, and the method according to the embodiment of the present invention includes:
step S401: and displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine.
Step S402: a slide input operation is obtained.
Step S403: it is determined whether the slide input operation satisfies a predetermined condition.
In this embodiment, the M objects (except the first object) displayed on the display unit are all interactable objects, and the parameter information of each interactable object can act on the recognition state of the normal working state of the speech recognition engine. Of course, the M objects (except the first object) may also include both interactable and non-interactable objects. When the M objects include non-interactable objects and the first object is determined according to the input operation, N objects among the M objects can be prompted, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine. That is, when the first object is determined, the N interactable objects can be determined from the M objects and prompted, for example by displaying a mark on each interactable object.
Determining whether the sliding input operation satisfies the predetermined condition is specifically: determining whether the objects determined by the input operation include a first object and a second object, where the second object is an interactable object among the M objects determined by the end point of the input operation, and the parameter information of the interactable object can act on the recognition state of the normal working state of the speech recognition engine. Specifically, when the M objects (except the first object) displayed on the display unit are all interactable objects, the second object is one of the M objects (except the first object) determined by the end point of the input operation; when the M objects (except the first object) include both interactable and non-interactable objects, the second object is one of the interactable objects among the M objects determined by the end point of the input operation.
Further, determining whether the objects determined by the input operation include the first object and the second object specifically includes: determining the first object when the trajectory points of the input operation satisfy a predetermined rule, where the predetermined rule may be that the number of trajectory points of the input operation is greater than a preset number; determining the second object when the end point of the input operation satisfies a predetermined rule, where the predetermined rule may be that the object at the end point of the input operation is an interactable object and the occlusion ratio between that object and the first object is greater than a preset ratio; and, when the input operation ends, determining that the input operation corresponds to the first object and the second object.
Specifically, the sliding input operation can control the first object to move, and the first object can be determined from the trajectory points of the sliding input. If the object at the end point of the input operation is an interactable object, the occlusion ratio between the first object and that interactable object is acquired. Since the input operation controls the first object to move, the first object stops at the end point when the input operation ends; at this time, the occlusion ratio between the first object and the interactable object at the end point can be acquired and compared with the preset ratio, and if the occlusion ratio is greater than the preset ratio, the object at the end point of the input operation is determined to be the second object.
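The two predetermined rules for the sliding input, the trajectory-point count for the first object and the endpoint occlusion test for the second object, can be combined into one check. This is a sketch; the threshold values are illustrative and not fixed by the embodiment:

```python
def slide_satisfies_condition(num_track_points, endpoint_interactable,
                              occlusion_ratio, preset_points=10,
                              preset_ratio=0.5):
    """True when the sliding input determines both objects: enough
    trajectory points to determine the first object, and an interactable,
    sufficiently occluded object at the end point for the second object.
    Threshold values (10 points, 0.5 ratio) are illustrative."""
    first_determined = num_track_points > preset_points
    second_determined = endpoint_interactable and occlusion_ratio > preset_ratio
    return first_determined and second_determined
```

If either rule fails, for example the drag ends on a blank area (no interactable endpoint object), the operation is treated as a second input operation instead.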
Step S404: and when the input operation meets a preset condition and is taken as a first input operation, controlling the voice recognition engine to switch from a low-power consumption state to a normal working state, wherein the first input operation can determine a first object and a second object, and the second object belongs to the M objects.
In this embodiment, the second object may be, but is not limited to, an identification of an application program, a search progress bar, a text input box, or a contact. Similarly, taking the electronic device as a mobile phone and the second object as an identifier of the music playing program as an example, the display unit of the mobile phone displays the identifier of the speech recognition engine and the identifier of the music playing program, referring to fig. 12, the user may drag the identifier of the speech recognition engine with a finger, and when the user drags the identifier of the speech recognition engine to the identifier of the music playing program with the finger, the speech recognition engine is switched from the low power consumption state to the normal operating state.
In addition, referring to fig. 13, the user may also drag the identifier of the music playing program, and when the user drags the identifier of the music playing program to the identifier of the speech recognition engine with a finger, the speech recognition engine is switched from the low power consumption state to the normal operating state. Referring to fig. 14, the user may also drag the identifier of the speech recognition engine and the identifier of the music playing program at the same time, and drag the identifier of the speech recognition engine and the icon of the music playing program together, so that the speech recognition engine is switched from the low power consumption state to the normal operation state.
Step S405: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
When the voice recognition engine is switched to the sound receiving state in the normal working state from the low power consumption state, a sound receiving interface can be displayed on the display unit so as to prompt a user to carry out voice input.
Step S406: when the speech recognition engine is in a recognition state of a normal operation state, the speech input is recognized based on the parameter information of the second object.
Step S407: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
In this embodiment, the output of the recognition result may be, but is not limited to, displaying the recognized information and/or controlling the application program to perform the corresponding operation.
Also taking the second object as the identifier of the music playing program as an example: if the voice input is a request to play the song "Invisible Wings", the song is searched for in the information set corresponding to the icon of the music playing program; once found, the recognition result is output, that is, the music playing program is started and the song "Invisible Wings" is played.
Taking the identifier of the address book as an example: when the speech recognition engine enters the sound reception state of the normal working state, it receives the voice input; if the voice input is a request to look up Li Ming's telephone number, the engine switches from the sound reception state to the recognition state of the normal working state and searches for Li Ming's telephone number in the information set corresponding to the identifier of the address book; once the number is found, it is displayed on the display unit.
The above process describes information processing when the sliding input operation satisfies the preset condition; the processing when the sliding input operation does not satisfy the preset condition is given below:
Step 408: when the sliding input operation does not satisfy the preset condition and the input operation serves as a second input operation, controlling the speech recognition engine to switch from the low power consumption state to the normal working state.
Specifically, the sliding input operation not satisfying the preset condition includes: the object at the end point of the sliding input operation includes only the first object, or the object at the end point of the sliding input operation is a non-interactable object among the M objects. For example, the user may drag the identifier of the speech recognition engine with a finger to a blank area, or drag it to a non-interactable object.
Step 409: when the speech recognition engine is in a sound reception state in a normal working state, speech input is obtained.
Step 410: when the speech recognition engine is in a recognition state of a normal operating state, speech input is recognized.
Since the object at the end point of the sliding input operation does not include an interactive object, the electronic device cannot narrow the recognition using the parameter information of a specific object; in this case, when the speech recognition engine is in the recognition state of the normal working state, the voice input must be recognized against all information sets.
Step 411: and outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
Again taking a mobile phone as the electronic device: the identifier of the speech recognition engine is displayed on the display unit of the phone, and the sliding input operation controls the identifier to move. When the sliding input operation ends, the object at its end point is detected; if that object is only the identifier of the speech recognition engine, the finger finally stopped in a blank area. Similarly, if the object detected at the end point of the sliding input operation is a non-interactive object, its parameter information cannot act on the recognition state of the normal working state of the speech recognition engine, so the voice input cannot be recognized based on that object; in this case, the voice input is recognized against all information sets.
It should be noted that when the input operation satisfies the preset condition, it is determined to be the first input operation. The first input operation can determine the second object, and since the second object is an interactive object, the voice input can be recognized within the information set corresponding to the second object. When the input operation does not satisfy the preset condition, it is determined to be the second input operation; since the second input operation does not determine an interactive object, the voice input must be recognized against all information sets. Comparing the two: the former searches only the information set corresponding to the second object and is therefore more targeted, with a smaller recognition scope and higher recognition efficiency and accuracy than the latter, which searches all information sets.
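The contrast drawn above — recognition scoped to one object's information set versus recognition over all information sets — can be sketched as follows. `INFO_SETS`, `recognize`, and the sample entries are hypothetical illustrations, not part of the patented method; the entry count stands in for the recognition scope.

```python
from typing import Optional, Tuple

INFO_SETS = {
    "address_book": ["Li Ming", "Wang Fang"],
    "music_player": ["Invisible Wings", "Moonlight"],
    "calendar": ["Team meeting", "Dentist"],
}

def recognize(voice_text: str, second_object: Optional[str]) -> Tuple[Optional[str], int]:
    """Return (matched entry, number of entries searched).

    With a second object (first input operation) only its set is searched;
    without one (second input operation) every set is searched.
    """
    if second_object is not None:
        candidates = [(second_object, e) for e in INFO_SETS[second_object]]
    else:
        candidates = [(k, e) for k, v in INFO_SETS.items() for e in v]
    searched = len(candidates)
    for _, entry in candidates:
        if entry.lower() in voice_text.lower():  # keyword match stands in for ASR
            return entry, searched
    return None, searched

# Scoped recognition searches fewer entries than full-set recognition.
print(recognize("play invisible wings", "music_player"))  # ('Invisible Wings', 2)
print(recognize("play invisible wings", None))            # ('Invisible Wings', 6)
```

The same query succeeds either way, but the scoped path examines only the second object's entries, which is the smaller-scope, higher-efficiency case the text describes.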
In the information processing method provided by this embodiment of the invention, M objects, including a first object, are displayed on a display unit; an input operation is obtained; and whether the input operation satisfies a preset condition is determined. When the input operation satisfies the preset condition and serves as a first input operation, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; when the engine is in the sound-receiving state of the normal working state, a voice input is obtained; when the engine is in the recognition state of the normal working state, the voice input is recognized based on the parameter information of a second object; and when the engine is in the result feedback state of the normal working state, the recognition result is output. With this method, the user only needs to perform a simple input operation to make the electronic device start the sound-receiving function, and the device can quickly feed back a result for the voice input; the operation is simple and the user experience is better.
Referring to fig. 15, a schematic structural diagram of an electronic device according to an embodiment of the present invention is shown, where the electronic device includes a display unit and a speech recognition engine, the speech recognition engine has a low power consumption state and a normal operating state, and the electronic device includes: the device comprises a display module 101, a first acquisition module 102, a determination module 103, a first control module 104, a second acquisition module 105, a first identification module 106 and a first output module 107. Wherein:
the display module 101 is configured to display M objects on a display unit, where the M objects include a first object, and the first object is an identifier of a speech recognition engine.
The first obtaining module 102 is configured to obtain an input operation.
A determining module 103, configured to determine whether the input operation satisfies a predetermined condition.
The first control module 104 is configured to control the speech recognition engine to switch from the low power consumption state to the normal working state when the input operation satisfies the preset condition and serves as a first input operation, where the first input operation can determine a first object and a second object, and the second object belongs to the M objects.
The second obtaining module 105 is configured to obtain a voice input when the voice recognition engine is in a sound receiving state in a normal operating state.
The first recognition module 106 is configured to, when the speech recognition engine is in a recognition state of a normal operating state, recognize the speech input based on the parameter information of the second object;
and the first output module 107 is used for outputting the recognition result when the speech recognition engine is in a result feedback state of a normal working state.
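The module pipeline just listed — switch the engine out of low power, receive sound, recognize, feed back the result — behaves like a small state machine. A minimal sketch under assumed names follows; `SpeechEngine`, its state strings, and the return to low power after feedback are illustrative assumptions, not taken from the patent.

```python
# Hypothetical state-machine sketch of the speech recognition engine:
# a low power state, and a normal working state consisting of
# sound-receiving, recognition, and result-feedback phases.
class SpeechEngine:
    def __init__(self):
        self.state = "low_power"

    def wake(self):
        # A first or second input operation switches the engine on.
        assert self.state == "low_power"
        self.state = "sound_receiving"

    def receive(self, voice_text):
        assert self.state == "sound_receiving"
        self.voice = voice_text
        self.state = "recognition"

    def recognize(self, info_set):
        assert self.state == "recognition"
        hit = next((e for e in info_set if e.lower() in self.voice.lower()), None)
        self.state = "result_feedback"
        return hit

    def feedback(self, result):
        assert self.state == "result_feedback"
        self.state = "low_power"  # assumed: return to low power after output
        return f"result: {result}"

engine = SpeechEngine()
engine.wake()
engine.receive("play Invisible Wings")
song = engine.recognize(["Invisible Wings", "Moonlight"])
print(engine.feedback(song))  # result: Invisible Wings
```

Each module in the embodiment maps onto one transition: the control modules drive `wake`, the obtaining modules drive `receive`, and the recognition and output modules drive `recognize` and `feedback`.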
The electronic device provided by this embodiment of the invention displays M objects, including a first object, on a display unit; obtains an input operation; and determines whether the input operation satisfies a preset condition. When the input operation satisfies the preset condition and serves as the first input operation, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; when the engine is in the sound-receiving state of the normal working state, a voice input is obtained; when the engine is in the recognition state of the normal working state, the voice input is recognized based on the parameter information of a second object; and when the engine is in the result feedback state of the normal working state, the recognition result is output. With this electronic device, the user only needs to perform a simple input operation to start the sound-receiving function, and the device can quickly feed back a result for the voice input; the operation is simple and the user experience is better.
The above embodiment may further include: the device comprises a second control module, a third acquisition module, a second identification module and a second output module. Wherein:
The second control module is configured to control the speech recognition engine to switch from the low power consumption state to the normal working state when the input operation does not meet the preset condition and is taken as the second input operation.
The third obtaining module is configured to obtain the voice input when the speech recognition engine is in the sound-receiving state of the normal working state. The second recognition module is configured to recognize the voice input when the speech recognition engine is in the recognition state of the normal working state.
The second output module is configured to output the recognition result when the speech recognition engine is in the result feedback state of the normal working state.
In the above embodiment, the determining module 103 may include: a first determining submodule and a prompting submodule. The first determining submodule is used for determining a first object according to the input operation; and the prompting submodule is used for prompting N objects in the M objects when the first object is determined, and the parameter information of each object in the N objects can act on the recognition state of the normal working state of the voice recognition engine.
In the above-described embodiment, the predetermined condition may be that the objects determined by the input operation include a first object and a second object, the second object being the one of the M objects determined at the end point of the input operation; and/or the predetermined condition may be that the objects determined by the input operation include a first object and a second object, the second object being the one of the N objects determined at the end point of the input operation.
In the above embodiment, the input operation may be an operation of two manipulation input points, in which case the determining module 103 may include: a second determining submodule, a third determining submodule, and a fourth determining submodule. The second determining submodule is configured to determine the first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule; the third determining submodule is configured to determine the second object when a second manipulation input point of the two manipulation input points satisfies a predetermined rule; and the fourth determining submodule is configured to determine, when the input operation is finished, that the input operation corresponds to the first object and the second object.
In the above embodiment, the input operation may also be a sliding input operation, and in this case, the determining module 103 may include: a fifth determination sub-module, a sixth determination sub-module, and a seventh determination sub-module. The fifth determining submodule is used for determining the first object when the track point of the input operation meets a preset rule; a sixth determining submodule for determining the second object when the end point of the input operation satisfies a predetermined rule; and the seventh determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished.
In the above embodiment, the electronic device has a touch sensing unit, a touch area of the touch sensing unit is divided into a first area and a second area, the second area coincides with the display unit, and the input operation is a sliding operation, where the determining module 103 includes: an eighth determination submodule, a ninth determination submodule, and a tenth determination submodule. The eighth determining submodule is used for determining the first object and displaying the first object when the starting point of the input operation is in the first area and the input operation moves from the first area to the dividing line of the first area and the second area, so that the input operation controls the first object to move in the second area; a ninth determining sub-module for determining the second object if an end point of the input operation satisfies a predetermined rule when the input operation moves from the first area into the second area; and the tenth determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished.
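The first-area/second-area sliding determination in the last variant can be sketched geometrically. The coordinates, the dividing line, the object bounding boxes, and all names below are illustrative assumptions rather than details from the patent.

```python
# Hypothetical sketch: a touch area split into a first area (e.g. a strip
# outside the display) and a second area coinciding with the display. A slide
# starting in the first area and ending on an interactive object in the
# second area determines both the first object and the second object.
DIVIDING_Y = 100          # y < 100 -> first area; y >= 100 -> second area
OBJECTS = {"address_book": (120, 300, 220, 400)}  # x0, y0, x1, y1 on display

def object_at(x, y):
    """Return the interactive object whose bounding box contains (x, y)."""
    for name, (x0, y0, x1, y1) in OBJECTS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def classify_slide(start, end):
    """Return (first_object, second_object) per the two-area rule, or None."""
    sx, sy = start
    ex, ey = end
    if sy < DIVIDING_Y and ey >= DIVIDING_Y:  # crossed from first into second area
        return "speech_engine_id", object_at(ex, ey)
    return None  # slide never started in the first area: no determination

print(classify_slide((50, 40), (150, 350)))  # ('speech_engine_id', 'address_book')
print(classify_slide((50, 40), (10, 350)))   # ('speech_engine_id', None)
```

A second element of `None`, as in the last call, corresponds to the end point landing in a blank area, i.e. the second-input-operation case in which recognition falls back to all information sets.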
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device or system type embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method applied to an electronic device, wherein the electronic device comprises a display unit and a voice recognition engine, the voice recognition engine has a low power consumption state and a normal working state, and the method comprises the following steps:
displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine;
obtaining an input operation;
determining whether the input operation meets a predetermined condition;
when the input operation meets the preset condition and serves as a first input operation, controlling the voice recognition engine to switch from the low-power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
when the voice recognition engine is in the sound receiving state of the normal working state, acquiring voice input;
when the voice recognition engine is in a recognition state of the normal working state, recognizing the voice input based on the parameter information of the second object;
when the speech recognition engine is in a result feedback state of the normal working state, outputting a recognition result;
when the input operation does not meet the preset condition and serves as a second input operation, controlling the voice recognition engine to be switched from the low-power-consumption state to the normal working state, and performing voice recognition on voice input based on all information sets corresponding to the M objects; wherein the second input operation is capable of determining the first object.
2. The method of claim 1, wherein the performing speech recognition on the speech input based on all the sets of information corresponding to the M objects comprises:
when the voice recognition engine is in a sound receiving state of the normal working state, acquiring the voice input;
when the voice recognition engine is in a recognition state of the normal working state, recognizing the voice input based on all information sets corresponding to the M objects;
and outputting a recognition result when the voice recognition engine is in a result feedback state of the normal working state.
3. The method of claim 1, wherein the determining whether the input operation satisfies a predetermined condition comprises:
determining a first object according to the input operation;
and prompting N objects in the M objects when the first object is determined, wherein the parameter information of each object in the N objects can act on the recognition state of the normal working state of the voice recognition engine.
4. The method according to claim 3, wherein the predetermined condition is whether the objects determined by the input operation include a first object and a second object, the second object being one of the M objects determined at the end point of the input operation;
and/or,
the predetermined condition is whether the object determined by the input operation includes a first object and a second object, and the second object is one of the N objects determined at the end point of the input operation.
5. The method according to claim 4, wherein the input operation is an operation of two manipulation input points;
the determining whether the input operation satisfies a predetermined condition includes:
determining a first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule, and determining a second object when a second manipulation input point of the two manipulation input points satisfies the predetermined rule;
when the input operation is finished, determining that the input operation corresponds to the first object and the second object;
or,
the input operation is a sliding input operation;
the determining whether the input operation satisfies a predetermined condition includes:
determining a first object when the track point of the input operation meets a preset rule;
determining a second object when the end point of the input operation meets a preset rule;
when the input operation is finished, determining that the input operation corresponds to the first object and the second object;
or,
the electronic equipment is provided with a touch sensing unit, a touch area of the touch sensing unit is divided into a first area and a second area, the second area is overlapped with the display unit, and the input operation is sliding operation;
the determining whether the input operation satisfies a predetermined condition includes:
when the starting point of the input operation is in the first area and the input operation moves from the first area to the dividing line of the first area and the second area, determining the first object and displaying the first object, so that the input operation controls the first object to move in the second area;
determining a second object if an end point of the input operation satisfies a predetermined rule while the input operation moves from the first area into the second area;
and when the input operation is finished, determining that the input operation corresponds to the first object and the second object.
6. An electronic device comprising a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal operating state, the electronic device comprising:
the display module is used for displaying M objects on the display unit, wherein the M objects comprise a first object which is an identifier of the voice recognition engine;
the first acquisition module is used for acquiring input operation;
the determining module is used for determining whether the input operation meets a preset condition;
a first control module, configured to control the speech recognition engine to switch from the low power consumption state to the normal operating state when the input operation satisfies the predetermined condition and the input operation is a first input operation, where the first input operation is capable of determining the first object and a second object, and the second object belongs to the M objects;
the second acquisition module is used for acquiring voice input when the voice recognition engine is in a sound reception state of the normal working state;
the first recognition module is used for recognizing the voice input based on the parameter information of the second object when the voice recognition engine is in the recognition state of the normal working state;
the first output module is used for outputting a recognition result when the voice recognition engine is in a result feedback state of the normal working state;
the processing module is used for controlling the voice recognition engine to switch from the low-power consumption state to the normal working state and carrying out voice recognition on voice input based on all information sets corresponding to the M objects when the input operation does not meet the preset condition and serves as a second input operation; wherein the second input operation is capable of determining the first object.
7. The electronic device of claim 6, wherein the processing module comprises:
the second control module is used for controlling the voice recognition engine to be switched from the low power consumption state to the normal working state when the input operation does not meet the preset condition and is taken as the second input operation;
the third acquisition module is used for acquiring the voice input when the voice recognition engine is in the sound reception state of the normal working state;
a second recognition module, configured to, when the speech recognition engine is in a recognition state of the normal operating state, recognize the speech input based on all information sets corresponding to the M objects;
and the second output module is used for outputting the recognition result when the voice recognition engine is in the result feedback state of the normal working state.
8. The electronic device of claim 6, wherein the determining module comprises:
a first determination submodule for determining a first object according to the input operation;
a prompt submodule, configured to prompt N objects of the M objects when the first object is determined, where parameter information of each object of the N objects is capable of acting on a recognition state of the normal operating state of the speech recognition engine.
9. The electronic device according to claim 8, wherein the predetermined condition is whether the objects determined by the input operation include a first object and a second object, the second object being one of the M objects determined at the end point of the input operation;
and/or the predetermined condition is whether the objects determined by the input operation comprise a first object and a second object, wherein the second object is one of the N objects determined at the end point of the input operation.
10. The electronic device according to claim 9, wherein the input operation is an operation of two manipulation input points;
the determining module comprises:
a second determination submodule for determining a first object when a first manipulation input point of the two manipulation input points satisfies a predetermined rule;
a third determination submodule for determining a second object when a second manipulation input point of the two manipulation input points satisfies a predetermined rule;
the fourth determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished;
or,
the input operation is a sliding input operation;
the determining module comprises:
a fifth determination submodule for determining the first object when the track point of the input operation satisfies a predetermined rule;
a sixth determination sub-module for determining a second object when an end point of the input operation satisfies a predetermined rule;
a seventh determining submodule, configured to determine that the input operation corresponds to the first object and the second object when the input operation is ended;
or,
the electronic equipment is provided with a touch sensing unit, a touch area of the touch sensing unit is divided into a first area and a second area, the second area is overlapped with the display unit, and the input operation is sliding input operation;
the determining module comprises:
an eighth determination submodule configured to determine the first object and display the first object when a start point of the input operation is in the first area and the input operation moves from the first area to a dividing line between the first area and the second area, so that the input operation controls the first object to move in the second area;
a ninth determining sub-module for determining a second object if an end point of the input operation satisfies a predetermined rule when the input operation moves from the first area into the second area;
and the tenth determining submodule is used for determining that the input operation corresponds to the first object and the second object when the input operation is finished.
CN201310376885.0A 2013-08-26 2013-08-26 A kind of information processing method and electronic equipment Active CN104423925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310376885.0A CN104423925B (en) 2013-08-26 2013-08-26 A kind of information processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN104423925A CN104423925A (en) 2015-03-18
CN104423925B true CN104423925B (en) 2018-12-14

Family

ID=52973025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310376885.0A Active CN104423925B (en) 2013-08-26 2013-08-26 A kind of information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104423925B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105022582B (en) * 2015-07-20 2019-07-12 广东小天才科技有限公司 Function triggering method of point reading terminal and point reading terminal
CN110779542A (en) * 2019-09-23 2020-02-11 深圳市跨越新科技有限公司 Method and device for synchronizing vehicle track playback and playing progress bar of map system
CN111429911A (en) * 2020-03-11 2020-07-17 云知声智能科技股份有限公司 Method and device for reducing power consumption of speech recognition engine in noise scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101436113A (en) * 2007-11-12 2009-05-20 捷讯研究有限公司 User interface for touchscreen device
CN101527745A (en) * 2008-03-07 2009-09-09 三星电子株式会社 User interface method and apparatus for mobile terminal having touchscreen
CN101989176A (en) * 2009-08-04 2011-03-23 Lg电子株式会社 Mobile terminal and icon collision controlling method thereof
JP2012216057A (en) * 2011-03-31 2012-11-08 Toshiba Corp Voice processor and voice processing method
CN102981763A (en) * 2012-11-16 2013-03-20 中科创达软件股份有限公司 Method of running application program under touch screen lock-out state


Also Published As

Publication number Publication date
CN104423925A (en) 2015-03-18


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant