
CN120010738A - Function execution method, function execution device and electronic equipment - Google Patents


Info

Publication number
CN120010738A
Authority
CN
China
Prior art keywords
shortcut instruction
input
instruction
shortcut
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510118811.XA
Other languages
Chinese (zh)
Inventor
林涵宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202510118811.XA
Publication of CN120010738A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on GUIs using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/08: Speech classification or search
    • G10L 15/18: Speech classification or search using natural language modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


The present application discloses a function execution method, a function execution device, and an electronic device, belonging to the field of communication technology. The function execution method includes: receiving a touch input and a voice input; in response to the touch input and the voice input, executing, based on input information of the touch input, a function operation corresponding to the voice input; wherein the input information of the touch input includes at least one of a touch object of the touch input, start position information and end position information of the touch input, and direction information corresponding to the touch input; and the touch object includes any one of an application icon, interface content, and an application interface.

Description

Function execution method, function execution device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a function execution method, a function execution device and electronic equipment.
Background
With the development of electronic devices, more and more functions can be performed by the electronic devices.
Currently, a user may trigger an electronic device to perform a corresponding function through touch input on the electronic device, such as click input, long-press input, or sliding input. For example, the user may trigger a mobile phone to open the main interface of a social application by clicking the icon of the social application on the desktop of the mobile phone. Then, clicking a contact control on the main interface of the social application triggers the mobile phone to display at least one contact avatar. Next, clicking one of the contact avatars triggers the mobile phone to display a chat interface between the user and that contact. Finally, entering chat text in the text input box on the chat interface and clicking the send control triggers the mobile phone to send the chat text to the contact, so as to chat with the contact.
When a user triggers the electronic device to execute the function through touch input, the operation steps of the touch input are generally complicated, and therefore the operation convenience of the electronic device is poor.
Disclosure of Invention
The embodiment of the application aims to provide a function execution method, a function execution device and electronic equipment, which can improve the function operation convenience of the electronic equipment.
In a first aspect, an embodiment of the present application provides a method for performing a function, including:
receiving a touch input and a voice input;
in response to the touch input and the voice input, executing a function operation corresponding to the voice input based on input information of the touch input;
wherein the input information of the touch input comprises at least one of a touch object of the touch input, start position information and end position information of the touch input, and direction information corresponding to the touch input;
and the touch object comprises any one of an application icon, interface content, and an application interface.
In a second aspect, an embodiment of the present application provides a function execution apparatus, including:
a receiving module, configured to receive a touch input and a voice input;
an execution module, configured to, in response to the touch input and the voice input, execute a function operation corresponding to the voice input based on input information of the touch input;
wherein the input information of the touch input comprises at least one of a touch object of the touch input, start position information and end position information of the touch input, and direction information corresponding to the touch input;
and the touch object comprises any one of an application icon, interface content, and an application interface.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a program or instructions executable on the processor, the program or instructions implementing the steps of the method for performing a function according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method for performing a function according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement a method for performing a function according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method of performing functions as described in the first aspect.
In the embodiment of the application, the touch input and the voice input are received, and the functional operation corresponding to the voice input is executed based on at least one of the touch object of the touch input, the starting position information and the ending position information of the touch input and the direction information corresponding to the touch input. Therefore, when the electronic equipment executes corresponding functional operation, the operation steps of touch input can be effectively simplified, and the operation convenience of the electronic equipment is further effectively improved.
Drawings
FIG. 1 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of a function execution method according to an embodiment of the present application;
FIG. 9A is a schematic diagram of an interface touch input according to an embodiment of the present application;
FIG. 9B is a schematic diagram of an interface touch input according to an embodiment of the present application;
FIG. 9C is a schematic diagram of an interface touch input according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a function execution apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a function execution apparatus according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The terms "at least one", "at least one of", and the like in the description and claims mean any one, any two, or a combination of two or more of the listed objects. For example, at least one of a, b, and c may represent "a", "b", "c", "a and b", "a and c", "b and c", or "a, b, and c", wherein a, b, and c may each be single or plural. Similarly, "at least two" means two or more, with a meaning similar to that of "at least one".
The function execution method, the function execution apparatus, the electronic device, and the medium provided by the embodiments of the present application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
The function execution method, the function execution apparatus, and the electronic device provided by the embodiments of the present application can be applied to scenarios in which a user needs the electronic device to execute a function in order to obtain a service. According to the embodiments of the present application, the electronic device can automatically execute the function operation corresponding to a user's touch input and voice input.
The following exemplarily describes a function execution method provided by the embodiment of the present application by taking some specific scenarios as examples.
Scenario 1: the user triggers the chat function of the mobile phone
The user clicks the icon of a social application and inputs "send a message to friend A" through voice, triggering the mobile phone to open the chat interface between the user and friend A in the social application.
Scenario 2: the user triggers the navigation function of the mobile phone
The user presses point A and point B on the interface of a map application and inputs "navigate" through voice, triggering the mobile phone to display a navigation path from point A to point B.
Scenario 3: the user triggers the copy function of the mobile phone
The user presses points C, D, E, and F on the interface of a document application and inputs "copy the document content" through voice, triggering the mobile phone to copy the document content in the interface area formed by points C, D, E, and F on the interface of the document application.
Scenario 4: the user triggers the small-window function of the mobile phone
The user slides toward the lower left on the current interface of the mobile phone and inputs "small window display" through voice, triggering the mobile phone to display the current interface as a small window in the lower-left corner of the screen.
Scenario 5: the user triggers the drawing function of the mobile phone
The user performs a drag input on the interface of a drawing application and, through voice input, triggers the mobile phone to fill the line segment or pattern drawn by the user with red according to the start position information, end position information, and direction information corresponding to the drag input.
It should be noted that the above scenarios 1 to 5 are only examples of some scenarios to which embodiments of the present application may be applied; in practical implementation, embodiments of the present application may be applied to other possible scenarios, which are not limited herein.
Based on the above scenarios, the function execution method provided by the embodiments of the present application receives a touch input and a voice input and, in response to them, executes the function operation corresponding to the voice input based on at least one of the touch object of the touch input, the start position information and end position information of the touch input, and the direction information corresponding to the touch input. In this scheme, the user can trigger the corresponding function operation through touch input and voice input. This simplifies the operation steps of touch input, improves the operation convenience of the electronic device, and improves human-computer interaction performance.
The execution body of the function execution method provided by the embodiment of the application can be a function execution device. The function executing apparatus may be an electronic device, or may be a functional component or a functional entity in the electronic device. The function execution method provided by the embodiment of the present application will be described in an exemplary manner by taking the execution body as an electronic device.
Fig. 1 is a flow chart of a function execution method according to an embodiment of the present application, and as shown in fig. 1, the function execution method according to an embodiment of the present application may include the following steps 101 and 102.
Step 101, the electronic device receives touch input and voice input.
In some embodiments of the present application, the touch input may be an input of a user in the first interface. The first interface may be any interface including an interactive interface, for example, the first interface may be a desktop interface, an application interface, or the like.
In some embodiments of the present application, the application interface may be an interface of a first application, and the first application may be any application in an electronic device, which is not limited in the present application.
In some embodiments of the present application, the first application may include, but is not limited to, any of a navigation-type application, a social software-type application, a shopping-type application, a video-type application, an album-type application, a music-type application, a painting-type application, a document-type application, and the like. The specific requirements can be determined according to actual use, and the embodiment of the application is not limited.
In some embodiments of the present application, the touch input may be a touch input performed by a user on the electronic device through a touch device such as a finger or a stylus.
Optionally, in the embodiments of the present application, the touch input may be a click input (a single click, a double click, or any number of clicks), a sliding input, a drag input, a pressure recognition input, a long-press input, or the like. The specific form of the touch input may be determined according to actual needs and is not limited in the embodiments of the present application.
Step 102, the electronic device responds to the touch input and the voice input, and executes the functional operation corresponding to the voice input based on the input information of the touch input.
In some embodiments of the present application, the input information of the touch input includes at least one of a touch object of the touch input, start position information and end position information of the touch input, and direction information corresponding to the touch input;
the touch object comprises any one of an application icon, interface content and an application interface.
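As an illustration only, the input information and touch object described above could be modeled as a small data structure. The following Python sketch is not part of the application; all type and field names are assumptions:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class TouchObject(Enum):
    APP_ICON = auto()           # an application icon on the desktop interface
    INTERFACE_CONTENT = auto()  # content at the touch position(s) on an application interface
    APP_INTERFACE = auto()      # the application interface as a whole

@dataclass
class TouchInputInfo:
    """Input information of a touch input: at least one field is populated."""
    touch_object: Optional[TouchObject] = None
    start_position: Optional[Tuple[int, int]] = None  # screen coordinates
    end_position: Optional[Tuple[int, int]] = None
    direction: Optional[str] = None                   # e.g. "down-left" for a slide input

# A slide toward the lower left on the current interface (cf. scenario 4):
info = TouchInputInfo(touch_object=TouchObject.APP_INTERFACE, direction="down-left")
```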
In some embodiments of the present application, in a case where the first interface is a desktop interface, the touch object may be an application icon on the desktop interface.
In some embodiments of the present application, the electronic device may obtain a touch position of the touch input on the desktop interface, and then determine, according to interface content of the desktop interface, an application icon corresponding to the touch position.
In example one, in combination with scenario 1, when the application icon of the social application is displayed in the top-left corner of the desktop interface, if the user performs a touch input at the top-left corner of the desktop interface, that is, the touch position of the touch input on the desktop interface is the top-left corner, the touch object is the application icon of the social application.
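The mapping from a touch position on the desktop interface to an application icon, as in example one, can be sketched as a simple bounding-box hit test. The layout and names below are hypothetical:

```python
# Hypothetical desktop layout: icon name -> bounding box (x1, y1, x2, y2) in pixels.
DESKTOP_ICONS = {
    "social_app": (0, 0, 100, 100),  # top-left corner of the desktop interface
    "map_app": (110, 0, 210, 100),
}

def icon_at(touch_pos):
    """Return the name of the icon whose bounding box contains the touch
    position, or None if the touch landed outside every icon."""
    x, y = touch_pos
    for name, (x1, y1, x2, y2) in DESKTOP_ICONS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None
```

Under this layout, a touch in the top-left corner, e.g. `icon_at((50, 50))`, resolves to `"social_app"`, so the touch object is that application's icon.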
In some embodiments of the present application, in a case where the first interface is an application interface, the touch object may include any one of interface content and an application interface.
In some embodiments of the present application, the touch input may include at least one touch sub-input. The electronic device may obtain a touch position of each touch sub-input in the at least one touch sub-input on the application interface, and then obtain the interface content, where the interface content may be interface content corresponding to each touch position, or the interface content may be interface content corresponding to the at least one touch position.
In example two, in combination with scenario 2, when the application interface is the application interface of the map application, the touch input may include a touch sub-input of the user at point A on the application interface of the map application and a touch sub-input at point B on that interface, that is, the touch positions of the two touch sub-inputs are point A and point B. The electronic device may then determine, according to the interface content of the application interface of the map application, the geographic position corresponding to point A and the geographic position corresponding to point B; that is, the interface content corresponding to each touch position is the corresponding geographic position.
In example three, in combination with scenario 3, when the application interface is the application interface of a document application, the touch input may include touch sub-inputs of the user at points C, D, E, and F on the application interface of the document application, that is, the touch positions of the four touch sub-inputs are points C, D, E, and F. The electronic device may then determine, according to the interface content of the application interface of the document application, the document content in the interface area formed by points C, D, E, and F; that is, the interface content corresponding to the at least one touch position is the document content in that interface area.
In combination with scenario 4 or scenario 5, the electronic device may determine that the touch object is the application interface when the touch input is a slide input or a drag input on the application interface.
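The rules above for deciding what the touch object is can be summarized in a short sketch. The rule set is a simplification distilled from scenarios 1 to 5, not the application's own algorithm:

```python
def resolve_touch_object(gesture: str, interface: str) -> str:
    """Hypothetical decision rules: a touch on the desktop targets an
    application icon; a slide or drag on an application interface targets
    the interface as a whole; point touches (click, press) target the
    interface content at the touch position(s)."""
    if interface == "desktop":
        return "application icon"
    if gesture in ("slide", "drag"):
        return "application interface"
    return "interface content"
```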
In some embodiments of the present application, as shown in FIG. 2 in conjunction with FIG. 1, the above step 102 may be implemented by the following steps 102a to 102c:
step 102a, the electronic device responds to the touch input to obtain a shortcut instruction set corresponding to the input information.
In some embodiments of the present application, when the touch object includes the application icon, the electronic device may obtain, in response to the touch input, a shortcut instruction set of a target application, where the application icon is an application icon of the target application.
In some embodiments of the present application, when the touch object includes interface content or an application interface, the electronic device may acquire a shortcut instruction set corresponding to the first application, and then acquire a shortcut instruction subset corresponding to the input information from the shortcut instruction set corresponding to the first application.
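The two-stage lookup, first the application's full shortcut instruction set and then the subset relevant to the input information, might look like the following. The registry contents and the keyword-based filter are illustrative assumptions; a real implementation would query each application's shortcut instruction interface:

```python
# Hypothetical registry: application -> its full shortcut instruction set.
SHORTCUT_SETS = {
    "social_app": ["send a message to a contact", "start a video call", "add a friend"],
    "map_app": ["measure the distance between two points",
                "navigate between two points",
                "mark a location as a favorite"],
}

def shortcut_subset(app: str, keyword: str) -> list:
    """Narrow the application's shortcut instruction set to the instructions
    relevant to the touch input information, approximated here by a
    keyword derived from that input information."""
    return [s for s in SHORTCUT_SETS.get(app, []) if keyword in s]
```

For example, touching two points in the map application could yield the keyword "points", narrowing the set to the distance-measurement and navigation instructions.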
In example five, in combination with example one, the shortcut instruction set may be the shortcut instruction set of the social application. The electronic device can acquire the shortcut instruction set of the social application through a shortcut instruction interface of the social application. The shortcut instruction set of the social application may include a shortcut instruction to send a message to any contact of the social application, a shortcut instruction to initiate a video call to any contact of the social application, a shortcut instruction to initiate a voice call to any contact of the social application, a shortcut instruction to add friends, a shortcut instruction to share a social circle, and the like.
In example six, in combination with example two, the electronic device may obtain a shortcut instruction set of the map application through a shortcut instruction interface of the map application, and then obtain, from that set, a shortcut instruction subset corresponding to the geographic positions of point A and point B. This subset may include, for example, a shortcut instruction for measuring the distance between the geographic position corresponding to point A and the geographic position corresponding to point B, and a shortcut instruction for navigating from the geographic position corresponding to point A to the geographic position corresponding to point B.
In example seven, in combination with example three, the electronic device may obtain a shortcut instruction set of the document application through a shortcut instruction interface of the document application, and then obtain, from that set, a shortcut instruction subset corresponding to the document content in the interface area formed by points C, D, E, and F on the application interface of the document application. This subset may include, for example, a shortcut instruction for copying that document content, a shortcut instruction for deleting that document content, a shortcut instruction for modifying the font of that document content, a shortcut instruction for adding an underline below that document content, and so on.
In example eight, in combination with scenario 4, the electronic device may obtain a shortcut instruction set of the first application through a shortcut instruction interface of the first application, and then obtain, from that set, a shortcut instruction subset corresponding to the application interface of the first application and the direction information of the downward-left sliding input. This subset may include, for example, a shortcut instruction for running the first application in the background, a shortcut instruction for displaying the application interface of the first application as a small window at the lower left of the screen, a shortcut instruction for closing the application interface of the first application, a shortcut instruction for reducing the display brightness of the application interface of the first application, and so on.
In example nine, in combination with scenario 5, the electronic device may obtain a shortcut instruction set of the drawing application through a shortcut instruction interface of the drawing application, and then obtain, from that set, a shortcut instruction subset corresponding to the application interface of the drawing application and the start position information, end position information, and direction information of the drag input on that interface. This subset may include, for example, a shortcut instruction for erasing the line segment or pattern corresponding to the start position information, end position information, and direction information, a shortcut instruction for filling that line segment or pattern with different colors, a shortcut instruction for changing the display position of that line segment or pattern, and so on.
Step 102b, the electronic device responds to the voice input, and obtains at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input.
In some embodiments of the present application, the electronic device may perform a recognition operation on the voice input in response to the voice input, obtain text information corresponding to the voice input, and then perform semantic understanding on the text information to obtain semantic information corresponding to the voice input.
In some embodiments of the present application, obtaining at least one shortcut instruction matching the semantic information from the shortcut instruction set may be understood as obtaining, from the shortcut instruction set, at least one shortcut instruction whose similarity to the semantic information is greater than or equal to a second threshold.
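The threshold comparison can be sketched with any text-similarity measure standing in for real semantic matching. Here difflib's character-level ratio and the threshold value are assumptions for illustration only:

```python
from difflib import SequenceMatcher

SECOND_THRESHOLD = 0.6  # hypothetical value; the application does not fix one

def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def matching_shortcuts(semantic_info: str, shortcut_set: list,
                       threshold: float = SECOND_THRESHOLD) -> list:
    """Return every shortcut instruction whose similarity to the semantic
    information is greater than or equal to the threshold."""
    return [s for s in shortcut_set if similarity(semantic_info, s) >= threshold]
```

In practice the character-level ratio would be replaced by a semantic-understanding model, but the thresholding logic is the same.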
In an example ten, in combination with scenario 1, the electronic device may obtain, from the shortcut instruction set of the social application, at least one shortcut instruction having a matching degree with the above semantic information greater than or equal to a second threshold.
In an example eleven, in combination with scenario 2, the electronic device may obtain, from the shortcut instruction subset corresponding to the geographic location corresponding to the point A and the geographic location corresponding to the point B, at least one shortcut instruction having a matching degree with the above semantic information greater than or equal to the second threshold.
In an example twelve, in combination with scenario 3, the electronic device may obtain, from the shortcut instruction subset corresponding to the document content in the interface area formed by points C, D, E, and F on the application interface of the document application, at least one shortcut instruction having a matching degree with the above semantic information greater than or equal to the second threshold.
In an example thirteen, in combination with scenario 4, the electronic device may obtain, from the shortcut instruction subset corresponding to the application interface of the first application and the direction information of the lower-left slide input, at least one shortcut instruction having a matching degree with the above semantic information greater than or equal to the second threshold.
In an example fourteen, in combination with scenario 5, the electronic device may obtain, from the shortcut instruction subset corresponding to the application interface of the drawing application and the start position information, end position information, and direction information of the drag input on the application interface of the drawing application, at least one shortcut instruction having a matching degree with the above semantic information greater than or equal to the second threshold.
Step 102c, the electronic device executes the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction and the semantic information.
In this way, the electronic device acquires, in response to the touch input, a shortcut instruction set corresponding to the input information; acquires, in response to the voice input, at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input; and executes the functional operation corresponding to the voice input based on the matching degree between each shortcut instruction in the at least one shortcut instruction and the semantic information. Therefore, when the electronic device executes the corresponding functional operation, the operation steps of the touch input can be effectively simplified, and the operation convenience of the electronic device is effectively improved.
In some embodiments of the present application, as shown in fig. 3 in conjunction with fig. 2, the above step 102c may be implemented by the following step 102c1 or the following step 102c2:
Step 102c1, the electronic device executes the function operation indicated by the first shortcut instruction when the difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is greater than or equal to the first threshold.
In some embodiments of the present application, the first shortcut instruction is a shortcut instruction with a maximum matching degree with the semantic information in the at least one shortcut instruction, and the second shortcut instruction is any shortcut instruction except the first shortcut instruction in the at least one shortcut instruction.
In some embodiments of the present application, a difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction being greater than or equal to the first threshold indicates that the matching degree corresponding to the first shortcut instruction is far greater than the matching degrees corresponding to the other shortcut instructions matched with the semantic information, and the electronic device may directly execute the functional operation indicated by the first shortcut instruction.
In some embodiments of the present application, the electronic device may implement the functional operation indicated by the first shortcut instruction by executing the first shortcut instruction.
In an example fifteen, in combination with scenario 1, when the voice input is "sending a message to friend A", the electronic device may obtain a first shortcut instruction from the shortcut instruction set of the social application, and the obtained first shortcut instruction may be a shortcut instruction for sending a message to friend A among the contacts of the social application.
In an example sixteen, in combination with scenario 2, when the voice input is "navigation", the electronic device may obtain, from the shortcut instruction subset corresponding to the geographic location corresponding to the point A and the geographic location corresponding to the point B, a first shortcut instruction, which may be a shortcut instruction for navigating from the geographic location corresponding to the point A to the geographic location corresponding to the point B.
In an example seventeen, in combination with scenario 3, when the voice input is "copy", the electronic device may obtain, from the shortcut instruction subset corresponding to the document content in the interface area formed by points C, D, E, and F on the application interface of the document application, a first shortcut instruction, which may be a shortcut instruction for copying the document content in that interface area.
In an example eighteen, in combination with scenario 4, when the above voice input is "small window display", the electronic device may obtain a first shortcut instruction from the shortcut instruction subset corresponding to the application interface of the first application and the direction information of the lower-left slide input, and the obtained first shortcut instruction may be a shortcut instruction for displaying the application interface of the first application in a small window at the lower left of the screen.
In an example nineteen, in combination with scenario 5, when the above voice input is "red", the electronic device may obtain, from the shortcut instruction subset corresponding to the application interface of the drawing application and the start position information, end position information, and direction information of the drag input on the application interface of the drawing application, a first shortcut instruction, which may be a shortcut instruction for filling, in red, a line segment or a pattern corresponding to the start position information, end position information, and direction information on the application interface of the drawing application.
Step 102c2, the electronic device executes the function operation indicated by the third shortcut instruction when the difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than the first threshold.
In some embodiments of the application, the third shortcut command is a shortcut command selected by a user from the at least one shortcut command.
In some embodiments of the present application, the difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than the first threshold, which indicates that the matching degree corresponding to the first shortcut instruction is close to the matching degree corresponding to other shortcut instructions matched with the semantic information, that is, the matching degrees corresponding to all shortcut instructions in the at least one shortcut instruction are close, and the electronic device may prompt the user to select one shortcut instruction from the at least one shortcut instruction. After determining that the third shortcut instruction is selected from the at least one shortcut instruction, the electronic device may execute the function operation indicated by the third shortcut instruction.
In some embodiments of the present application, the electronic device may implement the functional operation indicated by the third shortcut instruction by executing the third shortcut instruction.
It should be noted that, if it is determined that only one shortcut instruction matching the semantic information exists in the shortcut instruction set, the electronic device may execute a function operation corresponding to the one shortcut instruction matching the semantic information.
In this way, the electronic device executes the functional operation indicated by the first shortcut instruction when the difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is greater than or equal to the first threshold, which effectively improves the accuracy of executing the functional operation; and executes the functional operation indicated by the third shortcut instruction when the difference is smaller than the first threshold, so that the electronic device can execute the corresponding functional operation according to the selection of the user, thereby improving the flexibility with which the electronic device executes functional operations.
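The decision in steps 102c1 and 102c2 can be sketched as follows. This is a minimal illustration under assumptions: the first-threshold value is invented for the example, and since the second shortcut instruction may be any instruction other than the first, it suffices to compare the best candidate against the runner-up (the highest-scoring of the others).

```python
# Minimal sketch of steps 102c1/102c2: execute the best-matching shortcut
# instruction directly when it leads every other candidate by at least a
# first threshold; otherwise ask the user to choose. Threshold is illustrative.

def decide(scored_shortcuts, first_threshold: float = 0.2):
    """scored_shortcuts: list of (instruction, matching_degree) pairs, any order.
    Returns ("execute", instruction) or ("ask_user", candidate_names)."""
    ranked = sorted(scored_shortcuts, key=lambda pair: pair[1], reverse=True)
    if len(ranked) == 1:
        # Only one matching shortcut instruction: execute it directly.
        return ("execute", ranked[0][0])
    best, runner_up = ranked[0], ranked[1]
    # If best leads the runner-up by the threshold, it leads all others too.
    if best[1] - runner_up[1] >= first_threshold:
        return ("execute", best[0])
    return ("ask_user", [name for name, _ in ranked])
```

For instance, scores of 0.9 and 0.4 would execute the first instruction directly, while 0.6 and 0.5 would prompt the user to choose.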
In the function execution method provided by the embodiment of the application, the touch input and the voice input are received, and in response to the touch input and the voice input, the functional operation corresponding to the voice input is executed based on at least one of a touch object of the touch input, start position information and end position information of the touch input, and direction information corresponding to the touch input. Therefore, when the electronic device executes the corresponding functional operation, the operation steps of the touch input can be effectively simplified, and the operation convenience of the electronic device is effectively improved.
In some embodiments of the present application, as shown in fig. 4 in conjunction with fig. 3, before the step 102c2, the method for performing a function provided in the embodiment of the present application may further include the following step 103, where the step 102c2 may be implemented by the following step A:
Step 103, the electronic device displays at least one instruction identifier.
In some embodiments of the application, each instruction identifier in the at least one instruction identifier corresponds to one shortcut instruction in the at least one shortcut instruction.
In some embodiments of the present application, each instruction identifier may be an instruction identifier of one of the at least one shortcut instruction.
In some embodiments of the present application, the instruction identifier may be a static identifier such as a text, a picture, or a symbol, or may be a dynamic identifier such as a moving picture or a video. The specific form of the instruction identifier may be determined according to actual requirements, which is not limited in the embodiments of the present application.
In some embodiments of the application, the electronic device may display the at least one instruction identification on the screen assembly.
In some embodiments of the present application, the screen component may be a control in an electronic device that is not dependent on an application. That is, the screen assembly described above may be displayed in any interface of the electronic device.
In some embodiments of the application, the screen component described above may also be an atomic island.
In some embodiments of the present application, the atomic island, also referred to as an interactive element, surrounds the front-facing camera to form an interactive region similar to an "island". The region can also be understood as an interaction region in the screen corresponding to the front-facing camera of the electronic device, and may also be called an atomic island region. The atomic island may include various status information such as time, network status, and battery power, and may also display some simple notification messages such as short messages and mail notifications. The atomic island is a new interactive module embedded in the status bar, which can dynamically display status information of an electronic device such as a mobile phone and information of applications running in the background. It supports interaction through tapping, long pressing, and sliding, can display various information such as music playing, recording, Bluetooth earphone connection, a timer, the ringtone mode, ride-hailing, and the airplane mode, and can call out a floating menu for simple operations such as switching songs or pausing the timer.
In some embodiments of the present application, the shape of the screen assembly may be any possible shape, such as circular, rectangular, triangular, diamond, circular ring, or polygonal. The method can be specifically determined according to actual use requirements, and the embodiment of the application is not limited.
In some embodiments of the present application, the screen assembly may be fixedly displayed at a certain position in the screen.
For example, the screen assembly may be fixedly displayed in a preset position of a screen of the electronic device. For example, the preset position comprises at least one of a status bar, a notification bar, a task bar and a virtual navigation bar.
In some embodiments of the application, the screen component may be a hover control.
For example, the screen assembly may be displayed in a floating state at any position in the screen, and may be moved on the screen according to a drag operation of the user when the user drags the screen assembly.
It should be noted that, when the user wants to operate the screen assembly, the user may control the screen assembly to move through a specific input.
In some embodiments of the present application, the screen assembly may be further displayed in a first position in a superimposed manner with a preset transparency.
For example, assuming that the preset transparency is denoted as T, the value range of T may be 0% < T <100%.
In some embodiments of the present application, the background color of the screen assembly may also change. Illustratively, if the first application content of the first application changes, the background color of the screen component also changes.
In some embodiments of the present application, the screen assembly may also be displayed with a high brightness or a low brightness to alert the user to the specific display location of the screen assembly.
In practical implementations, the screen assembly may be displayed at the preset position in any possible form, which is not specifically limited in the embodiment of the present application. For example, the screen assembly may be displayed hovering over the status bar in the screen.
Step A, if the electronic device receives the input of the instruction identifier of the third shortcut instruction within the first duration, the electronic device executes the functional operation indicated by the third shortcut instruction.
In some embodiments of the present application, the electronic device may start the timer while displaying the at least one instruction identifier. When the timing duration of the timer is smaller than the first duration, if the electronic device receives the input of the instruction identifier of the third shortcut instruction, the electronic device can execute the function operation indicated by the third shortcut instruction.
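The timed selection window described above can be sketched as follows. This is an illustrative polling loop only; the duration value, the polling interval, and the `get_user_choice` callback are assumptions for the example, not part of the embodiment.

```python
# Illustrative sketch of waiting up to a first duration for the user to pick
# an instruction identifier. Returns the chosen instruction, or None on
# timeout (in which case the first prompt information would be displayed).
import time

def await_selection(get_user_choice, first_duration: float = 5.0, poll: float = 0.05):
    """get_user_choice: hypothetical callback returning the selected
    instruction, or None if the user has not selected yet."""
    deadline = time.monotonic() + first_duration
    while time.monotonic() < deadline:
        choice = get_user_choice()
        if choice is not None:
            return choice
        time.sleep(poll)
    return None  # timed out: no instruction identifier was selected
```

A real implementation would be event-driven rather than polling; the sketch only shows the timeout semantics of the first duration.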
In some embodiments of the present application, the input of the instruction identifier of the third shortcut instruction includes, but is not limited to, a touch input performed by the user on the instruction identifier of the third shortcut instruction through a touch device such as a finger or a stylus, a voice instruction input by the user, a specific gesture input by the user, or another feasible input, which may be determined according to actual use requirements and is not limited in the embodiment of the application.
In some embodiments of the present application, the specific gesture may be any one of a single click gesture, a swipe gesture, a drag gesture, a pressure recognition gesture, a long press gesture, an area change gesture, a double press gesture, and a double click gesture.
In some embodiments of the present application, the click input may be a single click input, a double click input, or any number of click inputs, and may also be a long press input or a short press input.
In some embodiments of the present application, the input of the instruction identifier of the third shortcut instruction is used to trigger the electronic device to execute the functional operation indicated by the third shortcut instruction.
In this way, if the input of the instruction identifier of the third shortcut instruction is received within the first duration, the electronic device executes the functional operation indicated by the third shortcut instruction, so that the electronic device can execute the corresponding functional operation in time according to the selection of the user, thereby improving the efficiency and flexibility of the operation functions of the electronic device.
In some embodiments of the present application, in conjunction with fig. 4, as shown in fig. 5, the method for performing a function provided in the embodiment of the present application may further include the following step 104:
Step 104, if the electronic device does not receive the input of the at least one instruction identifier within the first duration, displaying first prompt information.
In some embodiments of the present application, the first prompt information is used to prompt the user to perform at least one of the touch input and the voice input again.
In some embodiments of the present application, if the timing duration of the timer reaches the first duration and the electronic device has not received the input of the at least one instruction identifier, the electronic device may update the at least one instruction identifier displayed in the screen component to the first prompt information.
In some embodiments of the present application, if the timing duration of the timer reaches the first duration and the electronic device has not received the input of the at least one instruction identifier, the electronic device may further display an information prompt popup window, where the information prompt popup window includes the first prompt information.
In this way, when the electronic device does not receive the input of the at least one instruction identifier within the first time period, the first prompt information is displayed, so that the user can be prompted to carry out at least one of touch input and voice input again in time, the electronic device can execute corresponding functional operation according to at least one of touch input and voice input again by the user, and further the flexibility of executing the functional operation by the electronic device is improved.
In some embodiments of the present application, in conjunction with fig. 2, as shown in fig. 6, the method for performing a function provided in the embodiment of the present application may further include the following step 105:
step 105, if the shortcut instruction set does not have the shortcut instruction matched with the voice information, the electronic device displays the first prompt information.
In some embodiments of the present application, the absence of a shortcut instruction matching the voice information in the shortcut instruction set may be understood as meaning that the matching degree between any shortcut instruction in the shortcut instruction set and the voice information is less than the second threshold.
In some embodiments of the present application, the electronic device displays the first prompt information in the screen component if it is determined that a shortcut instruction matching the voice information does not exist in the shortcut instruction set.
In this way, when the electronic device determines that the shortcut instruction set does not have the shortcut instruction matched with the voice information, the electronic device displays the first prompt information, and can prompt the user to carry out at least one of touch input and voice input again, so that the electronic device carries out corresponding functional operation according to at least one of touch input and voice input again by the user, and further flexibility of the electronic device in executing the functional operation is improved.
Fig. 7 is a schematic flow chart of a method for executing functions according to an embodiment of the present application, taking an electronic device as a mobile phone and a touch input as a single click input as an example, the method may include the following steps 201 to 209.
Step 201, the mobile phone receives touch input and voice input of a user.
For example, as shown in fig. 9A, the touch input may be a click input of a touch on the screen by the user, and the voice input may be a command (e.g., "send message to friend a") input by the user through voice.
Step 202, the mobile phone acquires application information of the touch point and stores the application information into a semantic set.
The mobile phone can identify an application corresponding to the touch point by combining the position information of the touch point on the desktop, which is input by the touch, and the UI state of the desktop during touch, and create a semantic set, and store the application information into the semantic set.
Step 203, the mobile phone converts the voice corresponding to the voice input into text by means of voice recognition.
Step 204, the mobile phone disassembles the converted text into semantics, stores the semantics into a 'semantic set', and retrieves at least one matched shortcut instruction according to the semantic set.
For example, the mobile phone may further perform semantic analysis on the converted text, extract key semantics (such as "messaging", "friend a") of the user instruction, and store the key semantics in the semantic set.
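The construction of the "semantic set" in steps 202 to 204 can be sketched as follows. This is a minimal illustration under assumptions: the stop-word rule is a hypothetical stand-in for real semantic analysis, and the application and keyword names are invented for the example.

```python
# Hypothetical sketch of building the "semantic set": the application info from
# the touch point and the key semantics extracted from the recognized text are
# pooled into one set used for shortcut instruction retrieval. The stop-word
# splitting rule is an illustrative stand-in for real semantic understanding.

STOP_WORDS = {"to", "a", "the", "please"}

def extract_key_semantics(text: str) -> set:
    """Extract key semantics from the recognized text (illustrative rule)."""
    return {w for w in text.lower().replace(",", " ").split() if w not in STOP_WORDS}

def build_semantic_set(touch_app_info: set, recognized_text: str) -> set:
    """Merge touch-derived application info with voice-derived key semantics."""
    return set(touch_app_info) | extract_key_semantics(recognized_text)

# Hypothetical inputs: the touch point identified the social application,
# and speech recognition produced the text below.
semantic_set = build_semantic_set({"social_app"}, "send message to friend_a")
```

The resulting set pools both modalities, so the later retrieval step can match shortcut instructions against touch context and voice semantics at once.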
Step 205, if the mobile phone determines that the matching degree corresponding to one shortcut instruction is higher than the threshold and far higher than those of the other shortcut instructions, step 209 is performed.
Step 206, if the mobile phone determines that the matching degrees of two or more shortcut instructions are higher than the threshold, or the matching degrees of the several shortcut instructions with the highest matching degrees are close, step 207 is executed.
Step 207, the mobile phone displays the two or more shortcut instructions whose matching degrees are higher than the threshold, or the several shortcut instructions with the highest matching degrees.
Step 208, the mobile phone receives a selection operation of the user on one shortcut instruction to be executed among the displayed shortcut instructions, and executes step 209.
Illustratively, the mobile phone may retrieve, according to the semantics in the semantic set, the shortcut instructions provided through the shortcut instruction interfaces of the applications, and determine a matching degree for each. If the matching degree of a certain instruction is higher than the threshold and far higher than those of the other options, the instruction is executed directly; if two or more instructions are higher than the threshold, or the matching degrees of the several instructions with the highest matching degrees are close, a selection box is provided so that the user can select, through touch, the instruction to be executed; and if no option reaches the threshold, or the user does not adopt any of the candidate instructions, the user is prompted to perform at least one of the touch input and the voice input again.
Step 209, the mobile phone executes the corresponding shortcut instruction.
For example, the mobile phone may execute the shortcut instruction whose matching degree is higher than the threshold and far higher than those of the other shortcut instructions, or the shortcut instruction selected by the user.
The function execution method provided by the embodiment of the application can simplify the interaction of the multi-level complex functions applied to the mobile phone, improve the man-machine interaction efficiency and reduce the learning cost.
Fig. 8 is a flowchart of a method for executing functions according to an embodiment of the present application, taking an electronic device as a mobile phone, where a touch input includes a plurality of touch sub-inputs, the method may include the following steps 301 to 305.
Step 301, the mobile phone receives touch input and voice input of a user.
For example, as shown in fig. 9B, the touch input may be a click input of multiple points in the screen by the user (e.g., a click input of two touch points on the map application interface), and the voice input may be "riding navigation".
Step 302, the mobile phone acquires input information of touch input and stores the input information into a semantic set.
For example, the mobile phone may store the content on the touch points and the association relationship between the touch points (e.g., the geographic locations represented by the touch points, the distance between the geographic locations, etc.) as input information into the "semantic set" according to the UI content on the screen interface.
Step 303, the mobile phone acquires semantic information corresponding to the voice input, stores the semantic information into a semantic set, and retrieves at least one matched shortcut instruction according to the semantic set.
It should be noted that, the implementation process of the above step 303 may refer to the above step 203 and the above step 204, which are not described herein.
Step 304, the mobile phone matches the application-specific shortcut instructions by using the semantic set and gives feedback to the user.
It should be noted that, the implementation process of the above step 304 may refer to the above step 205 to the above step 208, which is not described herein.
Step 305, the mobile phone executes the corresponding shortcut instruction.
It should be noted that, the implementation process of the above step 305 may refer to the above step 209, which is not described herein again.
According to the function execution method provided by the embodiment of the application, on one hand, interaction of multi-level complex functions applied to the mobile phone can be simplified, man-machine interaction efficiency is improved, and learning cost is reduced. On the other hand, finer instruction input can be performed in a scene requiring multi-touch interaction in the application.
In some embodiments of the present application, the touch input may also be a drag input or a slide input, which provides richer start position information, end position information, and direction information for determining the shortcut instruction to be executed.
For example, as shown in FIG. 9C, the user slides the screen in a certain direction (e.g., from the lower left to the upper right) within a certain application while inputting "small window" by voice. The mobile phone may store the semantics "lower left to upper right" and "small window" into the "semantic set", so that a shortcut instruction for switching the application into a small window displayed at the upper right corner of the screen can be quickly executed.
For another example, the user performs drag input in the drawing application, and simultaneously inputs "red" by voice, and the mobile phone may execute a shortcut instruction for filling a line segment or pattern drawn by the user and corresponding to the start position information, the end position information and the direction information of the drag input into red.
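The direction-plus-voice examples above can be sketched as follows. This is an illustrative mapping only: the coordinate convention (y growing upward), the tie-breaking for zero offsets, and the instruction names in the lookup table are all assumptions for the example.

```python
# Illustrative mapping from slide direction plus a voice keyword to a shortcut
# instruction, as in the "small window" example. Direction is derived from the
# start and end positions; the instruction names are hypothetical.

def slide_direction(start, end):
    """Classify a slide by its start/end coordinates (y grows upward here;
    zero offsets fall back to 'left'/'down' for simplicity)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    horiz = "right" if dx > 0 else "left"
    vert = "up" if dy > 0 else "down"
    return f"{vert}_{horiz}"

def pick_shortcut(direction: str, voice_keyword: str) -> str:
    """Look up the shortcut instruction for a (direction, keyword) pair."""
    table = {
        ("up_right", "small window"): "small_window_at_upper_right",
        ("down_left", "small window"): "small_window_at_lower_left",
    }
    return table.get((direction, voice_keyword), "no_match")
```

A slide from the lower left to the upper right combined with the voice keyword "small window" thus selects the upper-right small-window instruction, mirroring the FIG. 9C example.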
The embodiment of the application can provide more accurate input for instructions requiring direction information or track information, and improves the man-machine interaction efficiency.
It should be noted that, the foregoing method embodiments, or various possible implementation manners in the method embodiments, may be executed separately, or may be executed in any two or more combinations with each other, and may specifically be determined according to actual use requirements, which is not limited by the embodiment of the present application.
According to the function execution method provided by the embodiment of the application, the execution main body can be a function execution device. In the embodiment of the present application, a method for executing a function by a function executing device is taken as an example, and the function executing device provided by the embodiment of the present application is described.
Fig. 10 is a schematic structural diagram of a function execution device 1000 according to an embodiment of the present application, where the function execution device 1000 includes a receiving module 1001 and an execution module 1002.
The receiving module 1001 is configured to receive touch input and voice input;
the execution module 1002 is used for responding to the touch input and the voice input, and executing the functional operation corresponding to the voice input based on the input information of the touch input;
The input information of the touch input comprises at least one of a touch object of the touch input, starting position information and ending position information of the touch input and direction information corresponding to the touch input;
The touch object includes any one of an application icon, interface content, and an application interface.
In some embodiments of the present application, the execution module 1002 is specifically configured to:
responding to the touch input, and acquiring a shortcut instruction set corresponding to the input information;
responding to the voice input, and acquiring at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input;
And executing the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction and the semantic information.
In some embodiments of the present application, the execution module 1002 is specifically configured to:
Executing the function operation indicated by the first shortcut instruction under the condition that the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is larger than or equal to a first threshold value;
or
Executing a function operation indicated by a third shortcut instruction when the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than a first threshold, wherein the third shortcut instruction is a shortcut instruction selected by a user from the at least one shortcut instruction;
the first shortcut instruction is the shortcut instruction with the largest matching degree with the semantic information in the at least one shortcut instruction, and the second shortcut instruction is any shortcut instruction except the first shortcut instruction in the at least one shortcut instruction.
In some embodiments of the present application, as shown in fig. 11 in conjunction with fig. 10, the apparatus 1000 further includes a display module 1003 configured to display at least one instruction identifier before executing the functional operation indicated by the third shortcut instruction, where each instruction identifier corresponds to one shortcut instruction in the at least one shortcut instruction, and the execution module 1002 is specifically configured to execute the functional operation indicated by the third shortcut instruction if an input of the instruction identifier of the third shortcut instruction is received within a first period of time.
In some embodiments of the present application, the display module 1003 is further configured to:
and if the input of the at least one instruction identifier is not received within the first time period, displaying first prompt information, wherein the first prompt information is used for prompting to carry out at least one of touch input and voice input again.
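The first-time-period behaviour can be sketched as a simple polling loop; `read_input` is an assumed callback that returns the tapped identifier or `None`, and the names are illustrative:

```python
import time

def await_selection(identifiers, read_input, first_period=5.0):
    """Wait for a tap on one of the displayed instruction identifiers.

    identifiers maps each instruction identifier to its shortcut
    instruction. Returns the chosen instruction, or None when the first
    time period elapses without input, in which case the caller displays
    the first prompt information asking the user to re-input.
    """
    deadline = time.monotonic() + first_period
    while time.monotonic() < deadline:
        tapped = read_input()
        if tapped in identifiers:
            return identifiers[tapped]
        time.sleep(0.05)  # avoid a busy loop between polls
    return None
```

A real implementation would be event-driven rather than polled, but the contract is the same: input inside the first time period executes the selected shortcut; silence triggers the prompt.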
In some embodiments of the present application, the display module 1003 is further configured to display first prompt information if no shortcut instruction matching the voice information exists in the shortcut instruction set, where the first prompt information is used to prompt the user to perform at least one of the touch input and the voice input again.
In the function execution device provided by the embodiment of the application, touch input and voice input are received, and in response to the touch input and the voice input, the functional operation corresponding to the voice input is executed based on at least one of the touch object of the touch input, the starting position information and ending position information of the touch input, and the direction information corresponding to the touch input. Therefore, when the function executing device executes the corresponding functional operation, the operation steps of touch input can be effectively simplified, thereby effectively improving the operation convenience of the function executing device.
The function executing device in the embodiment of the application may be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device, an augmented reality/virtual reality device, a robot, a wearable device, an ultra-mobile personal computer, a netbook or a personal digital assistant, or may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (TV), a teller machine, a self-service machine, or the like, which is not particularly limited.
The function executing device in the embodiment of the present application may be a device having a functional operating system. The functional operating system may be an Android functional operating system, an iOS functional operating system, or another possible functional operating system, which is not specifically limited in the embodiments of the present application.
The function executing device provided by the embodiment of the present application can implement each process implemented by each embodiment of the function executing method, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 12, the embodiment of the present application further provides an electronic device 1200, including a processor 1201 and a memory 1202, where the memory 1202 stores a program or instructions executable on the processor 1201. When executed by the processor 1201, the program or instructions implement each step of the above-mentioned embodiment of the function execution method and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1300 includes, but is not limited to, a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, and a processor 1310.
Those skilled in the art will appreciate that the electronic device 1300 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1310 by a power management system so as to perform functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which are not described in detail herein.
Wherein, the input unit 1304 is used for receiving touch input and voice input;
a processor 1310, configured to respond to the touch input and the voice input, and execute a functional operation corresponding to the voice input based on input information of the touch input;
The input information of the touch input comprises at least one of a touch object of the touch input, starting position information and ending position information of the touch input and direction information corresponding to the touch input;
The touch object includes any one of an application icon, interface content, and an application interface.
In some embodiments of the application, the processor 1310 is specifically configured to:
responding to the touch input, and acquiring a shortcut instruction set corresponding to the input information;
responding to the voice input, and acquiring at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input;
And executing the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction and the semantic information.
The electronic device responds to the touch input to acquire a shortcut instruction set corresponding to the input information, responds to the voice input to acquire, from the shortcut instruction set, at least one shortcut instruction matching the semantic information corresponding to the voice input, and executes the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction with the semantic information. In this way, the operation steps of touch input can be effectively simplified when the electronic device executes the corresponding functional operation, thereby effectively improving the operation convenience of the electronic device.
In some embodiments of the application, the processor 1310 is specifically configured to:
Executing the function operation indicated by the first shortcut instruction under the condition that the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is larger than or equal to a first threshold value;
Or alternatively
Executing a function operation indicated by a third shortcut instruction when the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than a first threshold, wherein the third shortcut instruction is a shortcut instruction selected by a user from the at least one shortcut instruction;
the first shortcut instruction is the shortcut instruction with the largest matching degree with the semantic information in the at least one shortcut instruction, and the second shortcut instruction is any shortcut instruction except the first shortcut instruction in the at least one shortcut instruction.
By executing the functional operation indicated by the first shortcut instruction when the difference between the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is greater than or equal to the first threshold, the electronic device can effectively improve the accuracy of the executed functional operation. By executing the functional operation indicated by the third shortcut instruction when this difference is smaller than the first threshold, the electronic device can execute the corresponding functional operation according to the user's selection, thereby improving the flexibility of the electronic device's operation.
In some embodiments of the present application, the display unit 1306 is configured to display at least one instruction identifier, each instruction identifier corresponding to one of the at least one shortcut instruction, before performing the functional operation indicated by the third shortcut instruction;
the processor 1310 is specifically configured to:
And if the input of the instruction identification of the third shortcut instruction is received in the first time period, executing the function operation indicated by the third shortcut instruction.
If the input of the instruction identification of the third shortcut instruction is received within the first time period, the electronic device executes the functional operation indicated by the third shortcut instruction, so that the electronic device can execute the corresponding functional operation in time according to the user's selection, thereby improving the efficiency and flexibility of the electronic device's operation.
In some embodiments of the present application, the display unit 1306 is further configured to:
and if the input of the at least one instruction identifier is not received within the first time period, displaying first prompt information, wherein the first prompt information is used for prompting to carry out at least one of touch input and voice input again.
In this way, when the electronic device does not receive the input of the at least one instruction identifier within the first time period, the first prompt information is displayed, which can promptly prompt the user to perform at least one of touch input and voice input again, so that the electronic device can execute the corresponding functional operation according to the user's renewed input, thereby improving the flexibility of the electronic device in executing functional operations.
In some embodiments of the present application, the display unit 1306 is further configured to:
And if the shortcut instruction set does not have the shortcut instruction matched with the voice information, displaying first prompt information, wherein the first prompt information is used for prompting to carry out at least one of touch input and voice input again.
In this way, when the electronic device determines that no shortcut instruction matching the voice information exists in the shortcut instruction set, the electronic device displays the first prompt information, which can prompt the user to perform at least one of touch input and voice input again, so that the electronic device executes the corresponding functional operation according to the user's renewed input, thereby improving the flexibility of the electronic device in executing functional operations.
In the electronic device provided by the embodiment of the application, touch input and voice input are received, and in response to the touch input and the voice input, the functional operation corresponding to the voice input is executed based on at least one of the touch object of the touch input, the starting position information and ending position information of the touch input, and the direction information corresponding to the touch input. Therefore, when the electronic device executes the corresponding functional operation, the operation steps of touch input can be effectively simplified, thereby effectively improving the operation convenience of the electronic device.
It should be appreciated that in embodiments of the present application, the input unit 1304 may include a graphics processor (Graphics Processing Unit, GPU) 13041 and a microphone 13042; the graphics processor 13041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1307 includes at least one of a touch panel 13071 and other input devices 13072. The touch panel 13071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store a functional operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 1309 may include volatile memory or nonvolatile memory, or the memory 1309 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable programmable ROM (Erasable PROM, EPROM), an electrically erasable programmable ROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchronous link dynamic random access memory (Synch Link DRAM, SLDRAM), or a direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1309 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 1310 may include one or more processing units, and optionally the processor 1310 integrates an application processor that primarily processes functional operations involving a functional operating system, user interface, application program, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1310.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-described embodiment of the method for executing a function, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein, the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the embodiment of the function execution method can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described function execution method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment methods may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application, essentially or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the function execution method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (11)

1. A method of performing a function, the method comprising:
receiving touch input and voice input;
Responding to the touch input and the voice input, and executing a functional operation corresponding to the voice input based on input information of the touch input;
The input information of the touch input comprises at least one of a touch object of the touch input, starting position information and ending position information of the touch input and direction information corresponding to the touch input;
the touch object comprises any one of an application icon, interface content and an application interface.
2. The method of claim 1, wherein the performing, in response to the touch input and the voice input, a functional operation corresponding to the voice input based on input information of the touch input comprises:
responding to the touch input, and acquiring a shortcut instruction set corresponding to the input information;
responding to the voice input, and acquiring at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input;
And executing the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction and the semantic information.
3. The method of claim 2, wherein the performing the functional operation corresponding to the voice input based on the degree of matching of each of the at least one shortcut instruction to the semantic information comprises:
Executing the function operation indicated by the first shortcut instruction under the condition that the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is larger than or equal to a first threshold value;
Or alternatively
Executing a function operation indicated by a third shortcut instruction when the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than a first threshold, wherein the third shortcut instruction is a shortcut instruction selected by a user from the at least one shortcut instruction;
the first shortcut instruction is the shortcut instruction with the largest matching degree with the semantic information in the at least one shortcut instruction, and the second shortcut instruction is any shortcut instruction except the first shortcut instruction in the at least one shortcut instruction.
4. A method according to claim 3, wherein prior to performing the functional operation indicated by the third shortcut instruction, the method further comprises:
Displaying at least one instruction identifier, wherein each instruction identifier corresponds to one shortcut instruction in the at least one shortcut instruction;
the executing the functional operation indicated by the third shortcut instruction includes:
And if the input of the instruction identification of the third shortcut instruction is received in the first time period, executing the function operation indicated by the third shortcut instruction.
5. The method according to claim 4, wherein the method further comprises:
and if the input of the at least one instruction identifier is not received within the first time period, displaying first prompt information, wherein the first prompt information is used for prompting to carry out at least one of touch input and voice input again.
6. A function execution device, characterized in that the device comprises:
The receiving module is used for receiving touch input and voice input;
the execution module is used for responding to the touch input and the voice input and executing the functional operation corresponding to the voice input based on the input information of the touch input;
The input information of the touch input comprises at least one of a touch object of the touch input, starting position information and ending position information of the touch input and direction information corresponding to the touch input;
the touch object comprises any one of an application icon, interface content and an application interface.
7. The device according to claim 6, wherein the execution module is specifically configured to:
responding to the touch input, and acquiring a shortcut instruction set corresponding to the input information;
responding to the voice input, and acquiring at least one shortcut instruction matched with the semantic information from the shortcut instruction set based on the semantic information corresponding to the voice input;
And executing the functional operation corresponding to the voice input based on the matching degree of each shortcut instruction in the at least one shortcut instruction and the semantic information.
8. The apparatus according to claim 7, wherein the execution module is specifically configured to:
Executing the function operation indicated by the first shortcut instruction under the condition that the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is larger than or equal to a first threshold value;
Or alternatively
Executing a function operation indicated by a third shortcut instruction when the difference value of the matching degree corresponding to the first shortcut instruction and the matching degree corresponding to the second shortcut instruction is smaller than a first threshold, wherein the third shortcut instruction is a shortcut instruction selected by a user from the at least one shortcut instruction;
the first shortcut instruction is the shortcut instruction with the largest matching degree with the semantic information in the at least one shortcut instruction, and the second shortcut instruction is any shortcut instruction except the first shortcut instruction in the at least one shortcut instruction.
9. The apparatus of claim 8, wherein the apparatus further comprises:
The display module is used for displaying at least one instruction identifier before executing the functional operation indicated by the third shortcut instruction, and each instruction identifier corresponds to one shortcut instruction in the at least one shortcut instruction;
the execution module is specifically configured to:
And if the input of the instruction identification of the third shortcut instruction is received in the first time period, executing the function operation indicated by the third shortcut instruction.
10. The apparatus of claim 9, wherein the display module is further configured to:
and if the input of the at least one instruction identifier is not received within the first time period, displaying first prompt information, wherein the first prompt information is used for prompting to carry out at least one of touch input and voice input again.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the function execution method of any one of claims 1 to 5.
CN202510118811.XA 2025-01-24 2025-01-24 Function execution method, function execution device and electronic equipment Pending CN120010738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510118811.XA CN120010738A (en) 2025-01-24 2025-01-24 Function execution method, function execution device and electronic equipment

Publications (1)

Publication Number Publication Date
CN120010738A true CN120010738A (en) 2025-05-16

Family

ID=95663439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510118811.XA Pending CN120010738A (en) 2025-01-24 2025-01-24 Function execution method, function execution device and electronic equipment

Country Status (1)

Country Link
CN (1) CN120010738A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination