CN107592416B - Voice message transmitting method, terminal and computer readable storage medium - Google Patents
- Publication number
- CN107592416B (application number CN201710777842.1A)
- Authority
- CN
- China
- Prior art keywords
- communication application
- application software
- voice
- interface
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Telephone Function (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a voice message sending method, a terminal and a computer readable storage medium, wherein the terminal is provided with an entity key which is in communication connection with a processor of the terminal, and the method comprises the following steps: respectively displaying corresponding interfaces after the display screen is split; when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information; if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form; and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person. The invention improves the convenience and efficiency of sending the voice information.
Description
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to a method for sending voice information, a terminal, and a computer-readable storage medium.
Background
With the rapid development of terminal technology, voice control of terminals is more and more common, and accordingly, more and more terminal applications support voice input.
In the existing voice information sending mode, a user is generally required to manually click an application icon to open an application, manually select a contact in the application, and only then click a recording button to input voice and send it. When the user needs to send voice information to a contact in another application, the user must quit the current application, click the icon of the other application, manually select a contact in that application, and again click a recording button to input and send the voice. Obviously, the operation process of this voice information sending mode is very cumbersome.
Disclosure of Invention
The invention mainly aims to provide a voice message sending method, a terminal and a computer readable storage medium, and aims to solve the problem that the existing voice message sending mode is complicated to operate.
In order to achieve the above object, the present invention provides a voice message sending method, which is applied to a terminal, wherein the terminal is provided with an entity button, the entity button is in communication connection with a processor of the terminal, and the voice message sending method includes:
respectively displaying corresponding interfaces after the display screen is split;
when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information;
if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form;
and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person.
Optionally, if the communication application software is displayed in the interface after the screen splitting, the step of displaying the voice file on the communication application software in a form of a floating ball includes:
and if a plurality of communication application software are displayed in the interface after the screen division, respectively displaying the voice file on each communication application software in a floating ball form.
Optionally, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area where the contact of the communication application software is located, the method further includes:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding direction of the sliding touch operation instruction;
and simulating the sliding touch operation instruction in other communication application software after screen splitting, and controlling the other communication application software to slide according to the sliding direction so as to switch and display the contact persons of each communication application software.
Optionally, after the step of simulating the sliding touch operation instruction in other communication application software after screen splitting, the method further includes:
and controlling other communication application software to slide according to the opposite direction of the sliding direction so as to switch and display the contact persons of each communication application software.
Optionally, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area where the contact of the communication application software is located, the method further includes:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding speed of the sliding touch operation instruction;
and controlling the communication application software receiving the sliding touch operation and the other communication application software to slide at different speeds, wherein the sliding speed corresponding to the communication application software receiving the sliding touch operation is higher than the sliding speed of the other communication application software.
Optionally, after the step of triggering a voice acquisition instruction, acquiring voice information and generating a voice file according to the acquired voice information when the entity key receives a pressing operation, the method further includes:
if the system interface is displayed in the interface after the screen splitting, reducing each application icon in the system interface according to a preset proportion;
and displaying each reduced application icon on the system interface.
Optionally, after the step of triggering a voice acquisition instruction, acquiring voice information and generating a voice file according to the acquired voice information when the entity key receives a pressing operation, the method further includes:
if the non-communication application software is displayed in the interface after the screen splitting, displaying the voice file on the edge area of the non-communication application software in a floating ball mode;
determining whether a voice assistant is present for the non-communicating application;
and if the voice assistant exists, starting the voice assistant in the non-communication application, and sending the voice file to the voice assistant to search for information when the floating ball is detected to be dragged to a non-edge area of the non-communication application software.
Optionally, after the step of determining whether the voice assistant is present in the non-communicating application, the method further comprises:
and if the voice assistant does not exist, when the floating ball is dragged to the non-edge area of the non-communication application software, converting the voice information in the voice file into text information so as to send the text information to a search interface of the non-communication application software for searching information.
In addition, to achieve the above object, the present invention further provides a terminal including a memory, a processor, and a voice information transmission program stored on the memory and operable on the processor, wherein the voice information transmission program, when executed by the processor, implements the steps of the voice information transmission method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a voice information transmission program which, when executed by a processor, realizes the steps of the voice information transmission method as described above.
The technical solution is applied to a terminal provided with an entity key that is in communication connection with a processor of the terminal. Corresponding interfaces are respectively displayed after the display screen is split; when a pressing operation is received based on the entity key, a voice acquisition instruction is triggered and voice information is acquired so that a voice file is generated according to the acquired voice information; if communication application software is displayed in a split-screen interface, the voice file is displayed on the communication application software in the form of a floating ball; and when the floating ball is detected to be dragged to the area where a contact of the communication application software is located, the voice file is sent to the contact. According to the invention, the voice information is collected through the pressing operation received by the entity key, and the voice information can be sent when the floating ball of the voice file is detected to be dragged to the area where a contact of the split-screen communication application software is located, so that the convenience and efficiency of sending voice information are improved.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention;
fig. 2 is a diagram of a communication network system architecture according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for sending a voice message according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of the present invention showing various application interfaces in a split screen;
FIG. 5 is a schematic diagram of dragging a floating ball to send a voice file in a communication application according to the present invention;
fig. 6 is a flowchart illustrating a voice message transmitting method according to a third embodiment of the present invention;
fig. 7 is a flowchart illustrating a fifth exemplary embodiment of a method for sending a voice message according to the present invention;
fig. 8 is a flowchart illustrating a method for sending voice information according to a sixth embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information of a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), TDD-LTE (Time Division Duplex Long Term Evolution), and the like.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user receive and send e-mails, browse webpages, access streaming media and the like, providing wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
Further, in the terminal shown in fig. 1, the memory 109 stores a voice information transmission program running on the processor 110, and the terminal calls the voice information transmission program stored in the memory 109 through the processor 110 and executes the following steps:
respectively displaying corresponding interfaces after the display screen is split;
when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information;
if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form;
and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person.
Further, if the communication application software is displayed in the interface after the screen division, the terminal calls a voice information sending program stored in the memory 109 through the processor 110 to realize the step of displaying the voice file on the communication application software in a floating ball form:
and if a plurality of communication application software are displayed in the interface after the screen division, respectively displaying the voice file on each communication application software in a floating ball form.
Further, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area where the contact of the communication application software is located, the terminal calls the voice information sending program stored in the memory 109 through the processor 110 to implement the following steps:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding direction of the sliding touch operation instruction;
and simulating the sliding touch operation instruction in other communication application software after screen splitting, and controlling the other communication application software to slide according to the sliding direction so as to switch and display the contact persons of each communication application software.
Further, after the step of simulating the sliding touch operation instruction in the other communication application software after screen splitting, the terminal calls a voice message sending program stored in the memory 109 through the processor 110 to implement the following steps:
and controlling other communication application software to slide according to the opposite direction of the sliding direction so as to switch and display the contact persons of each communication application software.
Further, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area where the contact of the communication application software is located, the terminal calls the voice information sending program stored in the memory 109 through the processor 110 to implement the following steps:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding speed of the sliding touch operation instruction;
and controlling the communication application software receiving the sliding touch operation and the other communication application software to slide at different speeds, wherein the sliding speed corresponding to the communication application software receiving the sliding touch operation is higher than the sliding speed of the other communication application software.
Further, after the step of triggering a voice acquisition instruction and acquiring voice information when receiving a pressing operation based on the entity key, and generating a voice file according to the acquired voice information, the terminal calls a voice information sending program stored in the memory 109 through the processor 110, so as to implement the following steps:
if the system interface is displayed in the interface after the screen splitting, reducing each application icon in the system interface according to a preset proportion;
and displaying each reduced application icon on the system interface.
Further, after the step of triggering a voice acquisition instruction and acquiring voice information when receiving a pressing operation based on the entity key, and generating a voice file according to the acquired voice information, the terminal calls a voice information sending program stored in the memory 109 through the processor 110, so as to implement the following steps:
if the non-communication application software is displayed in the interface after the screen splitting, displaying the voice file on the edge area of the non-communication application software in a floating ball mode;
determining whether a voice assistant is present for the non-communicating application;
and if the voice assistant exists, starting the voice assistant in the non-communication application, and sending the voice file to the voice assistant to search for information when the floating ball is detected to be dragged to a non-edge area of the non-communication application software.
Further, after the step of determining whether the non-communication application has the voice assistant, the terminal calls a voice message sending program stored in the memory 109 through the processor 110 to implement the following steps:
and if the voice assistant does not exist, when the floating ball is dragged to the non-edge area of the non-communication application software, converting the voice information in the voice file into text information so as to send the text information to a search interface of the non-communication application software for searching information.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention, where the communication network system is an LTE (Long Term Evolution) system, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers that manage functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201 and other functions; and the PCRF2036 is the policy and charging control decision point for traffic data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above terminal hardware structure and communication network system, the present invention provides various embodiments of the voice message transmission method.
The invention provides a voice message sending method.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for sending voice information according to a first embodiment of the present invention.
In this embodiment, the voice message sending method may be optionally applied to a terminal, where the terminal may be a mobile terminal as shown in fig. 1, the terminal is provided with an entity key, and the entity key is in communication connection with a processor of the terminal, and the voice message sending method includes the following steps:
respectively displaying corresponding interfaces after the display screen is split; when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information; if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form; and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person.
In this embodiment, the entity key may be a newly designed key in the terminal, and the entity key may be a multifunctional key including a voice wake-up function, an application start function, an interface start function, and other functions. The entity key can be arranged at any position of the terminal, and can be optionally arranged on the side or the back of the terminal, and the specific arrangement position is not limited and is determined according to the actual situation. In this embodiment, the entity key may also be an existing key of the terminal, such as a home key of the terminal, and when the entity key is the existing key of the terminal, more functions are multiplexed to the entity key, so that the entity key has multiple functions, such as a voice wake-up function, an application start function, or an interface start function. The entity key arranged on the terminal is in communication connection with a processor of the terminal so as to send the received operation instruction to the processor, and the processor executes corresponding control operation according to the received operation instruction, such as starting an application, starting an interface or executing voice information acquisition operation and the like. In this embodiment, the entity key is a capacitive key, and a fingerprint identification sensor is disposed on a surface of the capacitive key and used for receiving a pressing operation input by a user and collecting fingerprint information corresponding to the pressing operation.
The following are specific steps of implementing voice information transmission in this embodiment:
step S10, respectively displaying corresponding interfaces after the display screen is split;
in this embodiment, the interfaces displayed in a split-screen manner on the display screen are interfaces of a preset application, where the preset application may be one application or multiple applications: if the preset application is one application, multiple interfaces of that application are displayed in the split-screen manner, and if the preset application is multiple applications, the interfaces of the respective applications are displayed in the split-screen manner. It should be understood that when multiple applications are preset, a corresponding number of screens is displayed during the subsequent split screen, and the display mode of each screen may be left-right split-screen display or up-down split-screen display. In this embodiment, the preset application is preferably a plurality of applications, and the interfaces corresponding to the preset applications include: a terminal main system interface (terminal main interface), a system application interface (a system application is an existing application of the terminal, such as the settings application) and a third-party application interface (such as a WeChat application or QQ application), where the third-party application interface may further include communication application software and non-communication application software; which applications are specifically set is determined according to the actual situation and is not limited here.
After the interface corresponding to the preset application is displayed on the screen of the terminal, the terminal determines the interface type corresponding to each interface of the split screen, wherein the interface types comprise the following types: a terminal main system interface, a system application interface, communication application software or non-communication application software.
In the embodiment of the present invention, the trigger mode of the split-screen display interface of the display screen may be selected as split-screen display when the press operation is received, and the receiving mode includes several types:
1) in the first mode, the pressing operation is received in the terminal system interface (i.e. the terminal main interface) based on the entity key, and in this case, when the pressing operation is received based on the entity key, the application is displayed in a split screen manner, and the terminal system interface and the communication application software can be selectively displayed.
2) In a second mode, the pressing operation is received based on the entity key in an interface (such as a setting application) of the system application, and in this case, when the pressing operation is received based on the entity key, the application interface is displayed in a split screen manner, and the system application interface and the communication application software can be displayed optionally.
3) And in the third mode, the pressing operation is received in the interface of the third-party application based on the entity key, in this case, when the pressing operation is received based on the entity key, the application interface is displayed in a split screen mode, the third-party application interface and the communication application software can be displayed optionally, and if the third-party application interface is also the communication application software such as a WeChat application interface, other communication interfaces such as a QQ application interface can be displayed.
In addition, the trigger mode of the split-screen display interface of the display screen can also be split-screen display when an interface split-screen instruction is received, and the specific trigger mode can be screen touch, voice touch or gesture trigger and the like.
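The patent describes this split-screen behavior only in prose. Purely as an illustration of the flow above, the following Kotlin sketch models how split-screen panes might be created on a trigger and then classified by interface type; all names (SplitScreenManager, InterfaceType, Pane) are hypothetical and not taken from the patent.

```kotlin
// Illustrative sketch only; all names and the data model are assumptions, not the patent's code.

enum class InterfaceType { MAIN_SYSTEM, SYSTEM_APP, COMMUNICATION_APP, NON_COMMUNICATION_APP }

data class Pane(val appId: String, val type: InterfaceType)

class SplitScreenManager(private val presetApps: List<Pane>) {
    private val panes = mutableListOf<Pane>()

    // Called when the split-screen trigger is received (entity-key press, touch, voice or gesture).
    fun onSplitScreenTriggered(): List<Pane> {
        panes.clear()
        panes.addAll(presetApps)   // each preset application gets its own split-screen pane
        return panes               // caller lays the panes out left-right or top-bottom
    }

    // The terminal then classifies the panes so later steps know where the floating ball may go.
    fun communicationPanes(): List<Pane> =
        panes.filter { it.type == InterfaceType.COMMUNICATION_APP }
}

fun main() {
    val manager = SplitScreenManager(
        listOf(
            Pane("wechat", InterfaceType.COMMUNICATION_APP),
            Pane("launcher", InterfaceType.MAIN_SYSTEM)
        )
    )
    println(manager.onSplitScreenTriggered())  // both panes are shown after the split
    println(manager.communicationPanes())      // panes eligible to host the floating ball
}
```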
Step S20, when receiving the press operation based on the entity key, triggering the voice acquisition instruction and acquiring the voice information, and generating a voice file according to the acquired voice information;
in this embodiment, if the split screen of the display screen is based on the triggering operation input received by the entity key, the voice acquisition instruction may be directly triggered and the voice information may be acquired after the corresponding interfaces are respectively displayed after the split screen of the display screen, and the voice file may be generated according to the acquired voice information. Namely: when the entity key receives a pressing operation, the display screen displays corresponding interfaces after being split, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information.
In this embodiment, when an entity key set based on a terminal receives a pressing operation, a voice acquisition instruction is triggered and voice information is acquired, so as to generate a voice file according to the acquired voice information. The voice file generated by the terminal can be a temporary voice file, and the temporary voice file is automatically deleted when the temporary voice file expires.
Further, in order to improve the security of terminal use, the terminal may optionally first collect the fingerprint information corresponding to the pressing operation when the pressing operation is received based on the entity key, and then verify the collected fingerprint information. Specifically, the collected fingerprint information is compared with pre-stored fingerprint information to determine whether they match, and the collected fingerprint information is determined to match the pre-stored fingerprint information when the coverage rate between them reaches a preset proportion. When the collected fingerprint information matches the pre-stored fingerprint information, the voice acquisition instruction is triggered and voice information is acquired so as to generate the voice file. By verifying the fingerprint information and acquiring voice information only when verification succeeds, the security of terminal use is improved.
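A minimal sketch of this verify-then-record sequence is given below; the coverage-ratio threshold, the fingerprint data model and the recording call are assumptions for illustration, not the patent's implementation.

```kotlin
// Illustrative sketch only; the fingerprint model, threshold and recording call are assumptions.

data class Fingerprint(val minutiae: Set<Int>)

const val MATCH_RATIO = 0.8  // "preset proportion" of coverage required for a match

// Coverage of the pre-stored fingerprint by the freshly collected one.
fun matches(collected: Fingerprint, stored: Fingerprint): Boolean {
    val overlap = collected.minutiae.intersect(stored.minutiae).size.toDouble()
    return overlap / stored.minutiae.size >= MATCH_RATIO
}

// Hypothetical recorder: returns the path of a temporary voice file that expires later.
fun recordTemporaryVoiceFile(): String = "/tmp/voice_${System.currentTimeMillis()}.amr"

fun onEntityKeyPressed(collected: Fingerprint, stored: Fingerprint): String? =
    if (matches(collected, stored)) {
        recordTemporaryVoiceFile()  // verification succeeded: trigger the voice acquisition
    } else {
        null                        // verification failed: no voice is collected
    }

fun main() {
    val stored = Fingerprint(setOf(1, 2, 3, 4, 5))
    val collected = Fingerprint(setOf(1, 2, 3, 4, 9))
    println(onEntityKeyPressed(collected, stored))  // 4/5 = 0.8 coverage, so a file path is printed
}
```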
Step S30, if the communication application software is displayed in the split-screen interface, displaying the voice file on the communication application software in a floating ball form;
in this embodiment, if only one communication application software exists in the split-screen interface, the voice file can be displayed on that communication application software in the form of a floating ball. Alternatively, when communication application software is displayed in the split-screen interface, the floating ball of the voice file can also be displayed directly on the terminal screen; in this case the floating ball is an independent floating ball located on the upper layer above each communication application software and does not belong to any split-screen communication application software, and when the voice file of the floating ball subsequently needs to be sent to a contact of any communication application software, the floating ball is dragged to the area where that contact is located. The position at which the floating ball is displayed in the communication application software is not limited.
And step S40, when the floating ball is dragged to the area where the contact of the communication application software is located, sending the voice file to the contact.
In this embodiment, in the communication application software, if it is detected that the floating ball displayed in the software is dragged to the area where the contact of the communication application software is located, the voice file may be sent to the contact, so as to complete sending of the voice information.
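A hedged sketch of the drag-and-drop hit test implied by step S40 follows; the rectangle geometry, the Contact type and the sendVoiceFile stand-in are assumptions for illustration only.

```kotlin
// Illustrative sketch only; the geometry, Contact type and the send stand-in are assumptions.

data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}

data class Contact(val name: String, val region: Rect)

class FloatingBall(private val voiceFile: String) {
    // Called when the drag of the floating ball ends at screen position (x, y).
    fun onDrop(x: Int, y: Int, contacts: List<Contact>): Boolean {
        val target = contacts.firstOrNull { it.region.contains(x, y) } ?: return false
        sendVoiceFile(voiceFile, target)  // released over a contact row: send the file
        return true
    }

    private fun sendVoiceFile(file: String, contact: Contact) =
        println("sending $file to ${contact.name}")  // stand-in for the messaging app's send action
}

fun main() {
    val contacts = listOf(
        Contact("Alice", Rect(0, 0, 540, 120)),
        Contact("Bob", Rect(0, 120, 540, 240))
    )
    FloatingBall("/tmp/voice_1.amr").onDrop(100, 150, contacts)  // lands on Bob's row
}
```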
For better understanding of the present embodiment, the exemplary application scenarios are as follows: referring to fig. 4 and 5:
the interfaces corresponding to the preset applications are displayed in a split-screen mode. As shown in fig. 4, the split-screen display includes a WeChat application interface (communication application software containing contact information) and a terminal main system interface (containing application icons). After each interface is displayed in the split-screen mode, when the terminal receives a pressing operation based on the entity key, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information; since communication application software is displayed in the split-screen interface, the voice file is displayed on the communication application software in the form of a floating ball. If the communication application software detects that the floating ball displayed in its interface is dragged to the area where a contact of the communication application software is located, the voice file is sent to that contact, as shown in fig. 5.
The technical solution of this embodiment is applied to a terminal provided with an entity key that is in communication connection with a processor of the terminal. Corresponding interfaces are respectively displayed after the display screen is split; when a pressing operation is received based on the entity key, a voice acquisition instruction is triggered and voice information is acquired so as to generate a voice file according to the acquired voice information; if communication application software is displayed in a split-screen interface, the voice file is displayed on the communication application software in the form of a floating ball; and when the floating ball is detected to be dragged to the area where a contact of the communication application software is located, the voice file is sent to the contact. According to the invention, the voice information is collected through the pressing operation received by the entity key, and the voice information can be sent when the floating ball of the voice file is detected to be dragged to the area where a contact of the split-screen communication application software is located, so that the convenience and efficiency of sending voice information are improved.
Further, a second embodiment of the voice message transmission method according to the present invention is proposed based on the first embodiment.
The second embodiment of the voice information transmission method is different from the first embodiment of the voice information transmission method in that the step S30 includes:
and if a plurality of communication application software are displayed in the interface after the screen division, respectively displaying the voice file on each communication application software in a floating ball form.
In this embodiment, if a plurality of communication application software are displayed in the interface after the screen splitting, the voice file is respectively displayed on each communication application software in a floating ball form, and then, when a contact is selected in each communication application software to send the voice file, the floating ball displayed in the interface can be dragged to the area where the contact of the interface is located in each communication application software, so that the voice file is sent to the contact displayed in the interface. The position of the floating ball displayed in each communication application software is not limited.
In this embodiment, the floating ball is displayed for each communication application software, which is helpful for accurately dragging the floating ball to the area where the contact person on the interface is located, thereby preventing sending errors caused by dragging the floating ball across areas, and improving the accuracy of contact person selection, thereby improving the accuracy of voice information sending.
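As a small illustrative sketch of the one-ball-per-pane idea (types are hypothetical; in practice each map entry would be an overlay view bound to the same voice file):

```kotlin
// Illustrative sketch only; in practice each map entry would be an overlay view bound to the file.

data class CommPane(val appId: String)

// One ball per communication pane keeps every drag inside its own pane, which avoids
// cross-area drags delivering the voice file to the wrong application.
fun attachFloatingBalls(voiceFile: String, panes: List<CommPane>): Map<String, String> =
    panes.associate { it.appId to voiceFile }

fun main() {
    println(attachFloatingBalls("/tmp/voice_1.amr", listOf(CommPane("wechat"), CommPane("qq"))))
}
```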
Further, a third embodiment of the voice message transmission method according to the present invention is proposed based on the first embodiment.
The third embodiment of the voice information transmission method is different from the first embodiment of the voice information transmission method in that, referring to fig. 6, before the step S40, the method further includes:
step S50, when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding direction of the sliding touch operation instruction;
and step S60, simulating the sliding touch operation instruction in other communication application software after screen splitting, and controlling the other communication application software to slide according to the sliding direction so as to switch and display the contact of each communication application software.
That is, before the hover ball is dragged in the communication application software, if the contact displayed in the communication application software is not the contact of the voice information to be sent, the sliding touch operation can be input in the communication application software to switch the contact displayed in the communication application software.
In this embodiment, in order to improve the intelligence and efficiency of contact switching in each communication application software, when the terminal receives a sliding touch operation in any communication application software after screen splitting, the sliding direction of the sliding touch operation is determined. Specifically, fingerprint information corresponding to the sliding touch operation is collected, the sliding track of the fingerprint information is determined, and the sliding direction of the sliding touch operation is determined from the starting point and the end point of the sliding track. After the terminal determines the sliding direction corresponding to the sliding touch operation, the communication application software receiving the sliding touch operation instruction is controlled to slide in that sliding direction so as to switch the contacts; in addition, the sliding touch operation instruction is simulated in the other split-screen communication application software, and the other communication application software is controlled to slide in the same sliding direction so as to switch and display the contacts of each communication application software. That is, as long as one communication application software is slid, the other split-screen communication application software also slides, and the sliding direction is kept consistent so as to realize the switching display of contacts in each communication application software. Subsequently, in any communication application software, if the floating ball of that communication application software is detected to be dragged to the area where a contact of the communication application software is located, the voice file can be sent to that contact.
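The direction derivation and the replay of the slide in the other panes could look roughly like the Kotlin sketch below; the gesture model and pane API are assumptions, not the patent's implementation.

```kotlin
// Illustrative sketch only; the gesture model and pane API are assumptions.

import kotlin.math.abs

enum class Direction { UP, DOWN, LEFT, RIGHT }

data class Point(val x: Float, val y: Float)

// The direction is derived from the start and end points of the touch (fingerprint) track.
fun directionOf(start: Point, end: Point): Direction {
    val dx = end.x - start.x
    val dy = end.y - start.y
    return if (abs(dx) > abs(dy)) {
        if (dx > 0) Direction.RIGHT else Direction.LEFT
    } else {
        if (dy > 0) Direction.DOWN else Direction.UP
    }
}

class ContactListPane(val appId: String) {
    fun scroll(direction: Direction) = println("$appId scrolls $direction")
}

// A swipe received in one pane is replayed in every other split-screen pane,
// so the contact lists of all communication apps switch together.
fun onSwipe(source: ContactListPane, others: List<ContactListPane>, start: Point, end: Point) {
    val dir = directionOf(start, end)
    source.scroll(dir)
    others.forEach { it.scroll(dir) }  // simulate the same sliding instruction elsewhere
}

fun main() {
    onSwipe(
        ContactListPane("wechat"),
        listOf(ContactListPane("qq")),
        Point(300f, 900f), Point(300f, 200f)  // an upward swipe
    )
}
```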
Further, after the step S50, the method further includes the steps of:
and controlling other communication application software to slide according to the opposite direction of the sliding direction so as to switch and display the contact persons of each communication application software.
In this embodiment, when a sliding touch is performed in any split-screen communication application software, contact switching is controlled simultaneously in every other communication application software, so that the contacts do not need to be switched by sliding in each interface separately. Contacts can therefore be found quickly, which improves the efficiency of searching for contacts and thus the efficiency of sending subsequent voice information.
It should be noted that the technical solutions in the second embodiment are also applicable to this embodiment.
Further, a fourth embodiment of the voice message transmission method according to the present invention is proposed based on the third embodiment.
The fourth embodiment of the voice information transmission method is different from the third embodiment of the voice information transmission method in that, before the step S40, the method further includes:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding speed of the sliding touch operation instruction;
and controlling the communication application software receiving the sliding touch operation and the other communication application software to slide at different speeds, wherein the sliding speed corresponding to the communication application software receiving the sliding touch operation is higher than the sliding speed of the other communication application software.
In this embodiment, when a sliding touch operation instruction is received in any communication application software after screen splitting, the sliding speed of the sliding touch operation instruction is determined, the communication application software receiving the sliding touch operation and other communication application software are optionally controlled to slide at different speeds, and specifically, the sliding speed corresponding to the communication application software receiving the sliding touch operation is controlled to be higher than the sliding speeds of the other communication application software, so that a user can conveniently view contacts of the communication application software at the same time. In addition, the sliding speed of other communication application software is slowed down to prevent the user from having no time to see the contents of other communication application software when dragging the floating ball in the communication application software to send the voice file.
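A minimal sketch of this speed differentiation follows, assuming a simple per-frame scroll model and an arbitrary 0.5 factor for the other panes; both are assumptions for illustration.

```kotlin
// Illustrative sketch only; the per-frame scroll model and the 0.5 factor are assumptions.

data class ScrollPane(val appId: String, var offset: Float = 0f)

const val OTHER_PANE_FACTOR = 0.5f  // other panes scroll at half the source pane's speed

// The pane that received the swipe scrolls at the gesture's own speed;
// the remaining panes scroll more slowly so their contact lists stay readable.
fun applyScroll(source: ScrollPane, others: List<ScrollPane>, gestureSpeedPxPerFrame: Float) {
    source.offset += gestureSpeedPxPerFrame
    others.forEach { it.offset += gestureSpeedPxPerFrame * OTHER_PANE_FACTOR }
}

fun main() {
    val wechat = ScrollPane("wechat")
    val qq = ScrollPane("qq")
    applyScroll(wechat, listOf(qq), gestureSpeedPxPerFrame = 40f)
    println("${wechat.appId}=${wechat.offset}, ${qq.appId}=${qq.offset}")  // 40.0 vs 20.0
}
```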
Furthermore, while the floating ball is being dragged to a contact for sending on the terminal, the other communication application software may stop sliding, or may continue to slide to switch and display contacts, which improves the flexibility of switching and displaying contacts.
Further, a fifth embodiment of the voice information transmitting method of the present invention is proposed based on the first to fourth embodiments.
The fifth embodiment of the voice information transmission method is different from the first to fourth embodiments of the voice information transmission method in that, referring to fig. 7, after step S20, the method further includes:
step S70, if the system interface is displayed in the split interface, reducing each application icon in the system interface according to a preset proportion;
and step S80, displaying each reduced application icon on the system interface.
In this embodiment, if a system interface (i.e., a terminal main interface) is displayed in the interface after the split screen, at this time, each application icon is reduced in the system interface according to a preset ratio, in this embodiment, the preset ratio may be optionally set to 1/2 of the normally displayed application icon, and of course, may also be set to other values, which are not described herein. And after each application icon is reduced according to a preset proportion, displaying each reduced application icon on a system interface after screen division.
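Scaling the icons by the preset proportion is straightforward; the sketch below assumes the 1/2 ratio mentioned above and a simple pixel-size icon model, both of which are illustrative assumptions.

```kotlin
// Illustrative sketch only; the icon model is an assumption and the 1/2 ratio follows the text above.

data class AppIcon(val name: String, val widthPx: Int, val heightPx: Int)

const val SHRINK_RATIO = 0.5  // "preset proportion": half of the normally displayed icon size

fun shrinkIcons(icons: List<AppIcon>): List<AppIcon> =
    icons.map {
        it.copy(
            widthPx = (it.widthPx * SHRINK_RATIO).toInt(),
            heightPx = (it.heightPx * SHRINK_RATIO).toInt()
        )
    }

fun main() {
    val icons = listOf(AppIcon("Camera", 144, 144), AppIcon("Gallery", 144, 144))
    println(shrinkIcons(icons))  // each icon is redrawn at 72x72 in the split-screen system pane
}
```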
In this embodiment, during use of the terminal, the interfaces are displayed in a split-screen manner and include communication application software and a system interface. If a pressing operation is received based on the entity key, a voice file corresponding to the voice information is obtained first, and then the floating ball is dragged to the area where a contact is located in the communication application software to send the voice file. Meanwhile, the terminal continues to display the system interface, and the user can continue to perform other operations in the system interface, such as viewing information or watching videos, so that multi-screen use of the terminal is achieved. It should be noted that the split-screen interfaces do not affect each other.
In the embodiment, the flexibility and the intelligence of the terminal use are improved through the processing mode.
Further, a sixth embodiment of the voice information transmission method of the present invention is proposed based on the first to fourth embodiments.
The sixth embodiment of the voice information transmission method differs from the first to fourth embodiments of the voice information transmission method in that, referring to fig. 8, after the step S20, the method further includes:
step S90, if the non-communication application software is displayed in the split-screen interface, displaying the voice file on the edge area of the non-communication application software in a floating ball form;
step S100, determining whether the non-communication application has a voice assistant;
and step S110, if the voice assistant exists, starting the voice assistant in the non-communication application, and sending the voice file to the voice assistant to search for information when the floating ball is detected to be dragged to the non-edge area of the non-communication application interface.
Further, after step S100, the method further includes:
and step A, if the voice assistant does not exist, when the floating ball is dragged to the non-edge area of the non-communication application interface, converting the voice information in the voice file into text information so as to send the text information to the search interface of the non-communication application for searching information.
In this embodiment, if non-communication application software exists in the split-screen interface, a floating ball of the voice file is displayed in the edge region of the non-communication application software, and the terminal attempts to start a voice assistant in the non-communication application software. If the voice assistant can be started, the voice file is sent to the voice assistant to search for information when the floating ball is detected to be dragged to the non-edge region of the non-communication application software. If the voice assistant cannot be started, when the floating ball is detected to be dragged to the non-edge area of the non-communication application software, the voice information in the voice file is converted into text information so as to send the text information to the search interface of the non-communication application for searching information.
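A hedged Kotlin sketch of this branch is given below; the assistant lookup, the transcription call and the search call are placeholders, not real APIs of any particular platform.

```kotlin
// Illustrative sketch only; the assistant lookup, transcription and search calls are placeholders.

data class NonCommApp(val appId: String, val hasVoiceAssistant: Boolean)

// Stand-ins for the terminal's speech service and the application's search box.
fun forwardToAssistant(appId: String, voiceFile: String) =
    println("$appId assistant searches with $voiceFile")

fun transcribe(voiceFile: String): String = "weather tomorrow"  // hypothetical speech-to-text result

fun searchByText(appId: String, query: String) =
    println("$appId search box receives \"$query\"")

// Called when the floating ball is dropped in the non-edge area of a non-communication pane.
fun onBallDroppedInside(app: NonCommApp, voiceFile: String) {
    if (app.hasVoiceAssistant) {
        forwardToAssistant(app.appId, voiceFile)        // assistant available: search by voice
    } else {
        searchByText(app.appId, transcribe(voiceFile))  // no assistant: fall back to text search
    }
}

fun main() {
    onBallDroppedInside(NonCommApp("browser", hasVoiceAssistant = false), "/tmp/voice_1.amr")
}
```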
In this embodiment, when the non-communication application software cannot process the voice information directly, the voice information is converted into text information for processing; this processing manner improves the flexibility and intelligence of terminal use.
It should be noted that the technical solution in the fifth embodiment is also applicable to this embodiment.
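For illustration only, a rough sketch of the voice-assistant fallback logic described in this embodiment is given below; all helper names (`hasVoiceAssistant`, `sendAudioToAssistant`, `transcribe`, `submitToSearch`) are hypothetical stand-ins and do not come from the patent.

```kotlin
import java.io.File

fun handleDropOnNonCommunicationApp(voiceFile: File, droppedInNonEdgeArea: Boolean) {
    if (!droppedInNonEdgeArea) return        // ball stays parked in the edge area

    if (hasVoiceAssistant()) {
        // Voice assistant available: hand the raw voice file over for the search.
        sendAudioToAssistant(voiceFile)
    } else {
        // No voice assistant: fall back to speech-to-text and search with the text.
        val text = transcribe(voiceFile)
        submitToSearch(text)
    }
}

// Placeholder implementations so the sketch compiles; real ones would call an
// ASR engine and the application's search interface.
fun hasVoiceAssistant(): Boolean = false
fun sendAudioToAssistant(file: File) { }
fun transcribe(file: File): String = ""
fun submitToSearch(query: String) { }
```

The point of the fallback is that the same drag gesture works whether or not the non-communication application can consume raw audio.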
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer-readable storage medium has stored thereon a voice information transmission program which, when executed by a processor, performs the following steps:
respectively displaying corresponding interfaces after the display screen is split;
when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information;
if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form;
and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person.
Further, when the voice information sending program is executed by the processor, the step of displaying the voice file on the communication application software in a floating ball form if the communication application software is displayed in the split-screen interface further includes:
and if a plurality of communication application software are displayed in the interface after the screen division, respectively displaying the voice file on each communication application software in a floating ball form.
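As a sketch only of the "one floating ball per communication pane" idea above, the `Pane` interface and its `showOverlay` method below are hypothetical stand-ins; a real implementation might attach a small draggable view to each pane's window.

```kotlin
import java.io.File

interface Pane {
    val isCommunicationApp: Boolean
    fun showOverlay(label: String, payload: File)
}

// Show the same voice file as a floating ball on every split-screen
// communication application pane.
fun showFloatingBalls(panes: List<Pane>, voiceFile: File) {
    panes.filter { it.isCommunicationApp }
        .forEach { it.showOverlay(label = "voice", payload = voiceFile) }
}
```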
Further, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area of the contact of the communication application software, the following steps are also implemented when the processor executes the voice information sending program:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding direction of the sliding touch operation instruction;
and simulating the sliding touch operation instruction in other communication application software after screen splitting, and controlling the other communication application software to slide according to the sliding direction so as to switch and display the contact persons of each communication application software.
Further, after the step of simulating the sliding touch operation instruction in the other communication application software after screen splitting, when the voice message sending program is executed by the processor, the following steps are also implemented:
and controlling other communication application software to slide according to the opposite direction of the sliding direction so as to switch and display the contact persons of each communication application software.
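The direction-mirroring behaviour of the two variants above can be sketched as follows; the `ContactPane` interface is a hypothetical abstraction standing in for each split-screen contact list, not an API defined by the patent.

```kotlin
enum class Direction { UP, DOWN }

interface ContactPane {
    fun scroll(direction: Direction, distancePx: Int)
}

fun mirrorSlide(
    source: ContactPane,
    others: List<ContactPane>,
    direction: Direction,
    distancePx: Int,
    invert: Boolean = false   // true = the other panes slide in the opposite direction
) {
    // Slide the pane that actually received the touch.
    source.scroll(direction, distancePx)
    // Replay the slide in the other panes, optionally reversed.
    val mirrored = when {
        !invert -> direction
        direction == Direction.UP -> Direction.DOWN
        else -> Direction.UP
    }
    others.forEach { it.scroll(mirrored, distancePx) }
}
```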
Further, before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area of the contact of the communication application software, the following steps are also implemented when the processor executes the voice information sending program:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding speed of the sliding touch operation instruction;
and controlling the communication application software receiving the sliding touch operation and the other communication application software to slide at different speeds, wherein the sliding speed corresponding to the communication application software receiving the sliding touch operation is higher than the sliding speed of the other communication application software.
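A minimal sketch of this different-speed variant, with each pane modelled as a plain scroll callback; the 0.5 speed factor is an assumed value, since the text only requires that the other panes slide more slowly.

```kotlin
// The pane that received the touch scrolls the full distance, while the
// other panes scroll a slower, configurable fraction of it.
fun slideWithDifferentSpeeds(
    scrollTouchedPane: (Int) -> Unit,
    scrollOtherPanes: List<(Int) -> Unit>,
    distancePx: Int,
    othersSpeedFactor: Float = 0.5f   // assumed value for illustration
) {
    scrollTouchedPane(distancePx)                           // full-speed slide
    val reduced = (distancePx * othersSpeedFactor).toInt()  // slower slide
    scrollOtherPanes.forEach { it(reduced) }
}
```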
Further, after the step of triggering a voice acquisition instruction and acquiring voice information when receiving a pressing operation based on the entity key, and generating a voice file according to the acquired voice information, when the voice information sending program is executed by the processor, the following steps are also implemented:
if the system interface is displayed in the interface after the screen splitting, reducing each application icon in the system interface according to a preset proportion;
and displaying each reduced application icon on the system interface.
Further, after the step of triggering a voice acquisition instruction and acquiring voice information when receiving a pressing operation based on the entity key, and generating a voice file according to the acquired voice information, when the voice information sending program is executed by the processor, the following steps are also implemented:
if the non-communication application software is displayed in the interface after the screen splitting, displaying the voice file on the edge area of the non-communication application software in a floating ball mode;
determining whether a voice assistant is present for the non-communicating application;
if the voice assistant exists, starting the voice assistant in the non-communication application, and sending the voice file to the voice assistant for searching information when it is detected that the floating ball is dragged to a non-edge area of the non-communication application interface.
Further, after the step of determining whether the voice assistant is present in the non-communicating application, the voice messaging program when executed by the processor further performs the steps of:
if the voice assistant does not exist, when the floating ball is dragged to the non-edge area of the non-communication application interface, converting the voice information in the voice file into text information so as to send the text information to a search interface of the non-communication application for searching information.
In the technical solution of the present invention, when the voice information sending program is executed by the processor, the following steps are implemented: corresponding interfaces are respectively displayed after the display screen is split; when a pressing operation is received based on the entity key, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information; if communication application software is displayed in the split-screen interface, the voice file is displayed on the communication application software in a floating ball form; and when the floating ball is detected to be dragged to the contact area of the communication application software, the voice file is sent to the contact. According to the invention, the voice information is collected through the pressing operation received by the entity key, and the voice file can be sent as soon as the split-screen communication application software detects that its floating ball has been dragged to the area of a contact, which improves the convenience and efficiency of sending voice information.
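Purely as an illustrative sketch of the overall flow summarized above (not the patented implementation), the key-press-to-send path might be organized as follows; `recordVoice`, `FloatingBall`, and `sendToContact` are hypothetical names introduced only for this example.

```kotlin
import java.io.File

class FloatingBall(val voiceFile: File)

// Entity-key press: record voice and wrap the resulting file in a floating ball.
fun onEntityKeyPressed(): FloatingBall {
    val voiceFile = recordVoice()
    return FloatingBall(voiceFile)   // shown on each split-screen communication app
}

// Drag end: if the ball was dropped on a contact area, send the voice file.
fun onFloatingBallDropped(ball: FloatingBall, droppedOnContact: String?) {
    if (droppedOnContact != null) {
        sendToContact(droppedOnContact, ball.voiceFile)
    }
}

fun recordVoice(): File = File("voice.tmp")        // placeholder for microphone capture
fun sendToContact(contact: String, file: File) { } // placeholder for the messaging app send
```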
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A voice message sending method is applied to a terminal, the terminal is provided with an entity key, the entity key is in communication connection with a processor of the terminal, and the voice message sending method comprises the following steps:
respectively displaying corresponding interfaces after the display screen is split;
when the entity key receives a pressing operation, a voice acquisition instruction is triggered, voice information is acquired, and a voice file is generated according to the acquired voice information;
if the communication application software is displayed in the interface after the screen splitting, displaying the voice file on the communication application software in a floating ball form;
and when the floating ball is detected to be dragged to the area of the contact person of the communication application software, sending the voice file to the contact person.
2. The method for sending voice message according to claim 1, wherein if the communication application software is displayed in the interface after the screen splitting, the step of displaying the voice file on the communication application software in a floating ball form comprises:
and if a plurality of communication application software are displayed in the interface after the screen division, respectively displaying the voice file on each communication application software in a floating ball form.
3. The method according to claim 2, wherein before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area of the contact of the communication application, the method further comprises:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding direction of the sliding touch operation instruction;
and simulating the sliding touch operation instruction in other communication application software after screen splitting, and controlling the other communication application software to slide according to the sliding direction so as to switch and display the contact persons of each communication application software.
4. The method for sending voice messages according to claim 3, wherein after the step of simulating the sliding touch operation command in the other communication application software after the screen splitting, the method further comprises:
and controlling other communication application software to slide according to the opposite direction of the sliding direction so as to switch and display the contact persons of each communication application software.
5. The method according to claim 2, wherein before the step of sending the voice file to the contact when the floating ball is detected to be dragged to the area of the contact of the communication application, the method further comprises:
when a sliding touch operation instruction is received in any communication application software after screen splitting, determining the sliding speed of the sliding touch operation instruction;
and controlling the communication application software receiving the sliding touch operation and other communication application software to slide at different speeds, wherein the sliding speed corresponding to the communication application software receiving the sliding touch operation is higher than that of the other communication application software.
6. The method as claimed in any one of claims 1 to 5, wherein after the step of triggering a voice acquisition instruction and acquiring voice information when the entity key receives a pressing operation, and generating a voice file according to the acquired voice information, the method further comprises:
if the system interface is displayed in the interface after the screen splitting, reducing each application icon in the system interface according to a preset proportion;
and displaying each reduced application icon on the system interface.
7. The method as claimed in any one of claims 1 to 5, wherein after the step of triggering a voice acquisition instruction and acquiring voice information when the entity key receives a pressing operation, and generating a voice file according to the acquired voice information, the method further comprises:
if the non-communication application software is displayed in the interface after the screen splitting, displaying the voice file on the edge area of the non-communication application software in a floating ball mode;
determining whether a voice assistant is present for the non-communicating application;
if the voice assistant exists, starting the voice assistant in the non-communication application, and sending the voice file to the voice assistant for searching information when it is detected that the floating ball is dragged to a non-edge area of the non-communication application software.
8. The method of claim 7, wherein after the step of determining whether a voice assistant is present for the non-communicating application, the method further comprises:
if the voice assistant does not exist, when the floating ball is dragged to the non-edge area of the non-communication application software, converting the voice information in the voice file into text information so as to send the text information to a search interface of the non-communication application software for searching information.
9. A terminal, characterized in that the terminal comprises a memory, a processor and a voice information transmission program stored on the memory and executable on the processor, the voice information transmission program, when executed by the processor, implementing the steps of the voice information transmission method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a voice information transmission program is stored thereon, which when executed by a processor implements the steps of the voice information transmission method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710777842.1A CN107592416B (en) | 2017-08-31 | 2017-08-31 | Voice message transmitting method, terminal and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107592416A CN107592416A (en) | 2018-01-16 |
CN107592416B (en) | 2020-11-17
Family
ID=61050177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710777842.1A Active CN107592416B (en) | 2017-08-31 | 2017-08-31 | Voice message transmitting method, terminal and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107592416B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110298153B (en) * | 2018-03-21 | 2022-12-27 | 阿里巴巴集团控股有限公司 | Fingerprint identification method, mobile device and fingerprint identification system |
CN108920238A (en) * | 2018-06-29 | 2018-11-30 | 上海连尚网络科技有限公司 | Operate method, electronic equipment and the computer-readable medium of application |
CN110677532A (en) * | 2018-07-02 | 2020-01-10 | 深圳市汇顶科技股份有限公司 | Voice assistant control method and system based on fingerprint identification and electronic equipment |
CN109491562B (en) * | 2018-10-09 | 2020-07-07 | 珠海格力电器股份有限公司 | Interface display method of voice assistant application program and terminal equipment |
CN109788132A (en) * | 2018-12-29 | 2019-05-21 | 努比亚技术有限公司 | A kind of message treatment method, mobile terminal and computer readable storage medium |
CN110231896B (en) * | 2019-04-26 | 2022-09-09 | 平安科技(深圳)有限公司 | Information sending method and device, electronic equipment and storage medium |
CN113485608A (en) * | 2021-07-09 | 2021-10-08 | 维沃移动通信(杭州)有限公司 | Information control method and device and electronic equipment |
CN114489420A (en) * | 2022-01-14 | 2022-05-13 | 维沃移动通信有限公司 | Voice information sending method and device and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6903743B2 (en) * | 2002-10-16 | 2005-06-07 | Motorola, Inc. | Dynamic interactive animated screen saver |
CN103197911A (en) * | 2013-04-12 | 2013-07-10 | 广东国笔科技股份有限公司 | Method, system and device for providing speech input |
CN103716374A (en) * | 2013-12-04 | 2014-04-09 | 宇龙计算机通信科技(深圳)有限公司 | Method for sharing files and server |
CN106302137A (en) * | 2016-10-31 | 2017-01-04 | 努比亚技术有限公司 | Group chat message processing apparatus and method |
CN106303004A (en) * | 2016-08-04 | 2017-01-04 | 北京奇虎科技有限公司 | The way of recording, device and mobile terminal under screen lock state |
CN106941000A (en) * | 2017-03-21 | 2017-07-11 | 百度在线网络技术(北京)有限公司 | Voice interactive method and device based on artificial intelligence |
CN106959746A (en) * | 2016-01-12 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | The processing method and processing device of speech data |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101047656A (en) * | 2006-03-31 | 2007-10-03 | 腾讯科技(深圳)有限公司 | Method for implementing E-mail quickly transmitting and its system |
CN102111503A (en) * | 2011-02-18 | 2011-06-29 | 宇龙计算机通信科技(深圳)有限公司 | Quick operation method and mobile terminal |
CN102780653B (en) * | 2012-08-09 | 2016-03-09 | 上海量明科技发展有限公司 | Quick method, client and the system communicated in instant messaging |
CN102833185B (en) * | 2012-08-22 | 2016-05-25 | 青岛飞鸽软件有限公司 | Pull the method that word starts immediate communication tool chatting window |
CN103281683B (en) * | 2013-06-08 | 2016-08-17 | 网易(杭州)网络有限公司 | A kind of method and device sending speech message |
CN103473027B (en) * | 2013-09-16 | 2017-01-04 | 张智锋 | A kind of communicating terminal split screen multi-task interaction method and communicating terminal |
CN103533135A (en) * | 2013-10-18 | 2014-01-22 | 广东欧珀移动通信有限公司 | Method and mobile terminal for shortcut operation of contacts |
CN103558958B (en) * | 2013-10-29 | 2017-04-12 | 宇龙计算机通信科技(深圳)有限公司 | Application program function calling method and terminal |
CN106445271A (en) * | 2015-08-06 | 2017-02-22 | 天津三星通信技术研究有限公司 | Method and device for quickly sharing contents in terminal |
CN105843466B (en) * | 2016-03-14 | 2020-03-31 | 广州趣丸网络科技有限公司 | Real-time voice method and device |
CN105847594A (en) * | 2016-05-25 | 2016-08-10 | 宁波萨瑞通讯有限公司 | System and method for quickly sending file |
CN106527883B (en) * | 2016-09-29 | 2020-03-17 | 北京小米移动软件有限公司 | Content sharing method and device and terminal |
CN106527882A (en) * | 2016-09-29 | 2017-03-22 | 北京小米移动软件有限公司 | Content sharing method, device and terminal |
CN106648535A (en) * | 2016-12-28 | 2017-05-10 | 广州虎牙信息科技有限公司 | Live client voice input method and terminal device |
Also Published As
Publication number | Publication date |
---|---|
CN107592416A (en) | 2018-01-16 |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant