[go: up one dir, main page]

CN104951230A - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment Download PDF

Info

Publication number
CN104951230A
CN104951230A CN201510288274.XA CN201510288274A CN104951230A CN 104951230 A CN104951230 A CN 104951230A CN 201510288274 A CN201510288274 A CN 201510288274A CN 104951230 A CN104951230 A CN 104951230A
Authority
CN
China
Prior art keywords
touch point
touch
input
executing
input data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510288274.XA
Other languages
Chinese (zh)
Inventor
陈小翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510288274.XA priority Critical patent/CN104951230A/en
Publication of CN104951230A publication Critical patent/CN104951230A/en
Pending legal-status Critical Current

Links

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method. The information processing method is applied to electronic equipment. The electronic equipment supports touch screen input, the touch screen corresponds to N input units, the N input units corresponds to N touching areas on the touch screen, the touch area corresponded by each input unit is different, and the N touching areas constitute the whole touching area of the touching screen. The information processing method comprises the steps that a first operation is acquired, a first instruction is generated, and the first instruction indicates and monitors an input event; the first instruction is responded, the corresponding input areas and input units are confirmed and input in a touch mode, and input data are acquired through the confirmed input units; comparison between the acquired input data and the preset event trigger condition is conducted, whether the input data meet the event trigger condition or not is judged, and when the input data meet the trigger condition of a first event, a first treatment corresponding to the first event is executed. The invention further discloses the electronic equipment.

Description

Information processing method and electronic equipment
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to an information processing method and electronic equipment.
Background
With the rapid development of the intelligent terminal, users are used to perform various operations such as surfing the internet, reading news, reading novels, and the like by using the intelligent terminal. With the development of the intelligent terminal with the touch screen, a variety of touch gestures have been implemented to enrich various gesture operations of the user, but as various operation demands of the user increase, the existing gesture types generally cannot meet actual operation needs of the user. Therefore, how to enrich the touch gestures of the intelligent terminal with the touch screen to meet the actual operation needs of the user as much as possible becomes a technical problem to be solved for a long time.
Disclosure of Invention
In order to solve the existing technical problems, the invention provides an information processing method and electronic equipment.
The invention provides an information processing method, which is applied to electronic equipment, wherein the electronic equipment supports touch screen input, the touch screen is correspondingly provided with N input units, the N input units correspond to N touch areas on the touch screen, and the touch areas corresponding to each input unit are different; the method comprises the following steps:
obtaining a first operation, and generating a first instruction, wherein the first instruction indicates to monitor an input event;
responding to the first instruction, determining an input area and an input unit corresponding to the touch input, and obtaining input data through the determined input unit;
and comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing first processing corresponding to a first event when the input data is determined to meet the first event trigger condition.
In the foregoing solution, the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is met includes:
if the obtained input data is used for representing the position coordinates and the time information of two touch points in the first input area, when the distance between the first touch point and the second touch point is smaller than or equal to a preset first threshold value, and the time difference between the touch ending time of the first touch point and the touch starting time of the second touch point is smaller than or equal to a preset second threshold value, it is determined that the input data meets a first event triggering condition.
In the foregoing solution, the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is met includes:
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
In the foregoing solution, the executing the first process corresponding to the first event trigger condition includes:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
In the foregoing solution, the executing the first process corresponding to the first event trigger condition includes:
and calculating to obtain the position change rate of the touch point according to the distance between the second touch point and the first frequency, obtaining a corresponding interface scaling according to the position change rate, and executing scaling processing on the content in the display interface according to the scaling.
In the foregoing solution, the performing scaling processing on content in a display interface according to a scaling ratio includes:
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to move upwards relative to the first touch point, executing the zooming-in processing of the content in the display interface according to the zooming scale, and when the second touch point is judged to move downwards relative to the first touch point, executing the zooming-out processing of the content in the display interface according to the zooming scale; or, when the second touch point is judged to move upwards relative to the first touch point, executing the reduction processing of the content in the display interface according to the scaling, and when the second touch point is judged to move downwards relative to the first touch point, executing the enlargement processing of the content in the display interface according to the scaling.
The invention also provides electronic equipment which supports touch screen input, wherein the touch screen is correspondingly provided with N input units, the N input units correspond to N touch areas on the touch screen, and the touch areas corresponding to the input units are different; the electronic device includes:
the instruction generation unit is used for obtaining a first operation and generating a first instruction, wherein the first instruction indicates a monitoring input event;
the instruction execution unit is used for responding to the first instruction, determining an input area and an input unit corresponding to the touch input, and obtaining input data through the determined input unit; and comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing first processing corresponding to a first event when the input data is determined to meet the first event trigger condition.
In the foregoing solution, the instruction execution unit is further configured to, if the obtained input data is used to represent position coordinates and time information of two touch points in the first input area, determine that the input data satisfies the first event trigger condition when a distance between the first touch point and the second touch point is less than or equal to a preset first threshold, and a time difference between a touch end time of the first touch point and a touch start time of the second touch point is less than or equal to a preset second threshold.
In the above solution, the instruction execution unit is further configured to,
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
In the foregoing solution, the instruction execution unit is further configured to execute the following first processing:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
In the foregoing solution, the instruction execution unit is further configured to execute the following first processing:
and calculating to obtain the position change rate of the touch point according to the distance between the second touch point and the first frequency, obtaining a corresponding interface scaling according to the position change rate, and executing scaling processing on the content in the display interface according to the scaling.
In the above solution, the instruction execution unit is further configured to,
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to move upwards relative to the first touch point, executing the zooming-in processing of the content in the display interface according to the zooming scale, and when the second touch point is judged to move downwards relative to the first touch point, executing the zooming-out processing of the content in the display interface according to the zooming scale; or, when the second touch point is judged to move upwards relative to the first touch point, executing the reduction processing of the content in the display interface according to the scaling, and when the second touch point is judged to move downwards relative to the first touch point, executing the enlargement processing of the content in the display interface according to the scaling.
According to the information processing method and the electronic equipment, the touch screen of the electronic equipment is divided into multiple areas, different input units are respectively registered for different touch areas, and different event trigger conditions are preset for the different touch areas, so that richer user touch gestures and richer application functions can be realized.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram illustrating region division of a touch screen of an electronic device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a second embodiment of the invention;
fig. 4 is a schematic diagram of an internal frame of an electronic device according to a third embodiment of the present invention;
fig. 5 is a schematic flowchart illustrating a process of executing text translation by a user double-clicking an area where a left border and a right border are located according to a fourth embodiment of the present invention;
fig. 6 is a schematic diagram illustrating an internal logic flow of an electronic device according to a fifth embodiment of the present invention;
fig. 7 is a schematic view illustrating a process of identifying a double-click side frame area according to a sixth embodiment of the present invention;
fig. 8 is a schematic view illustrating a process of identifying sliding of the side frame region according to a seventh embodiment of the present invention;
FIG. 9 is a schematic diagram of an external representation of another electronic device according to an embodiment of the invention.
Detailed Description
The technical solution of the present invention is further elaborated below with reference to the drawings and the specific embodiments.
Example one
The information processing method provided by the embodiment of the invention is applied to electronic equipment, the electronic equipment supports touch screen input, the touch screen is correspondingly provided with N input units, the N input units correspond to N touch areas on the touch screen, and the touch areas corresponding to the input units are different. For example, a plurality of touch regions (or input regions) are divided in the touch screen of the electronic device shown in fig. 1, and at least include a1 region, a2 region, and a B region, where the a1 region is a left side frame of the touch screen of the electronic device, the a2 region is a right side frame of the touch screen of the electronic device, and the B region is a middle region of the touch screen of the electronic device, then the a1 region, the a2 region, and the B region respectively correspond to respective input units, and the input units are used for monitoring touch operations in the region and reporting the touch operations.
As shown in fig. 2, the method includes:
step 201, obtaining a first operation, and generating a first instruction, where the first instruction indicates to monitor an input event.
The electronic equipment generates a first instruction after obtaining the touch operation of the user, wherein the first instruction indicates to monitor an input event.
Step 202, responding to the first instruction, determining an input area and an input unit corresponding to the touch input, and obtaining input data through the determined input unit.
After the first instruction is generated, the electronic equipment judges an input area where the touch operation is located, and determines an input unit corresponding to the input area, so that input data corresponding to the touch operation of a user is obtained through the determined input unit.
For example: the user touches the area A1 in the picture 1 by a finger, the electronic device generates a first instruction, determines the input unit corresponding to the user according to the area A1 touched by the user finger, and obtains input data corresponding to the user touch operation through the input unit corresponding to the area A1;
for another example: when a user finger touches the area A2 in FIG. 1, the electronic device generates a first command, determines the input unit corresponding to the user finger according to the area A2 touched by the user finger, and obtains the input data corresponding to the user touch operation through the input unit corresponding to the area A2.
Step 203, comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing a first process corresponding to a first event when the input data is determined to meet the first event trigger condition.
For example: referring to fig. 1, if the electronic device obtains input data through an input unit corresponding to the a1 area of the electronic device, the obtained input data is compared with a preset event trigger condition corresponding to the a1 area, whether the event trigger condition of the a1 area is met is determined, and when it is determined that a certain event trigger condition of the a1 area is met, a processing operation of a corresponding event is performed;
for another example: referring to fig. 1, if the electronic device obtains input data through an input unit corresponding to the a2 area of the electronic device, the obtained input data is compared with a preset event trigger condition corresponding to the a2 area, whether the event trigger condition of the a2 area is satisfied is determined, and when it is determined that a certain event trigger condition of the a2 area is satisfied, a processing operation of the corresponding event is performed.
In an embodiment, the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is met includes:
if the obtained input data is used for representing the position coordinates and the time information of two touch points in the first input area, when the distance between the first touch point and the second touch point is smaller than or equal to a preset first threshold value, and the time difference between the touch ending time of the first touch point and the touch starting time of the second touch point is smaller than or equal to a preset second threshold value, it is determined that the input data meets a first event triggering condition.
That is, if a user double-clicks a touch area, the electronic device needs to determine whether to be a valid double click according to the distance between the two touch points and the interval time, and if it is determined to be a valid double click, it is determined that the trigger condition of the double click event is satisfied; otherwise, it is determined that the trigger condition for the double click event is not satisfied.
For example: if the user double-clicks the area a1 in fig. 1, the electronic device needs to determine whether to be a valid double click according to the distance and the separation time between the two touch points, and if so, it is determined that the trigger condition of the double click event in the area a1 is satisfied; otherwise, it is determined that the trigger condition for the A1 area double click event is not satisfied.
For another example: if the user double-clicks the area a2 in fig. 1, the electronic device needs to determine whether to be a valid double click according to the distance and the separation time between the two touch points, and if so, it is determined that the trigger condition of the double click event in the area a2 is satisfied; otherwise, it is determined that the trigger condition for the A2 area double click event is not satisfied.
In an embodiment, the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is met includes:
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
That is, the electronic apparatus determines whether the input by the user is a valid slide operation input in the above manner. For example: when the finger of the user slides on the area a1 in fig. 1, the electronic device first obtains the position coordinates and time information of the first touch point, and then acquires the position coordinates of the subsequent touch point at a certain frequency (e.g. 1/85 seconds); and judging whether the input is effective sliding input or not by comparing the second touch point coordinate acquired by acquisition with the first touch point coordinate. For another example: when the finger of the user slides on the area a2 in fig. 1, the electronic device first obtains the position coordinates and time information of the first touch point, and then acquires the position coordinates of the subsequent touch point at a certain frequency (e.g. 1/85 seconds); and judging whether the input is effective sliding input or not by comparing the second touch point coordinate acquired by acquisition with the first touch point coordinate.
In one embodiment, the executing the first process corresponding to the first event trigger condition includes:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
That is, through the user input operation identification, when it is determined that a double-click operation is effective for a specific input area, the electronic device may be triggered to perform a language translation operation on the text content in the current display interface; or,
through the user input operation recognition, when the sliding operation is judged to be effective for a specific input area, the electronic equipment can be triggered to execute the language translation operation on the text content in the current display interface.
In the specific implementation process, what user input operation and processing mode is specifically adopted to correspond to each other can be selected according to actual needs.
In one embodiment, the executing the first process corresponding to the first event trigger condition includes:
and calculating to obtain the position change rate of the touch point according to the distance between the second touch point and the first frequency, obtaining a corresponding interface scaling according to the position change rate, and executing scaling processing on the content in the display interface according to the scaling.
That is, when it is determined that the slide operation is effective for a specific input region once through the user input operation recognition, the reduction or enlargement processing of the content displayed in the current display interface may be performed; the specific scaling may be determined according to the measured finger sliding speed of the user, for example: the preset mapping relation table can be searched according to the measured sliding speed of the finger to obtain the corresponding scaling.
In one embodiment, the scaling process for scaling the content in the display interface includes:
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to move upwards relative to the first touch point, executing the zooming-in processing of the content in the display interface according to the zooming scale, and when the second touch point is judged to move downwards relative to the first touch point, executing the zooming-out processing of the content in the display interface according to the zooming scale; or, when the second touch point is judged to move upwards relative to the first touch point, executing the reduction processing of the content in the display interface according to the scaling, and when the second touch point is judged to move downwards relative to the first touch point, executing the enlargement processing of the content in the display interface according to the scaling.
That is, when it is determined that a slide operation is once valid for a specific input area through the aforementioned user input operation recognition, if it is determined that the slide operation is an upward slide, the enlargement processing of the content in the display interface is performed at a zoom scale, and if it is determined that the slide operation is a downward slide, the reduction processing of the content in the display interface is performed at a zoom scale; or,
when the user input operation identification is judged to be effective once aiming at a specific input area, if the sliding operation is judged to be upward sliding, the reduction processing of the content in the display interface is executed according to the scaling, and if the sliding operation is judged to be downward sliding, the enlargement processing of the content in the display interface is executed according to the scaling.
Example two
The second embodiment of the invention provides electronic equipment, wherein the electronic equipment supports touch screen input, the touch screen is correspondingly provided with N input units, the N input units correspond to N touch areas on the touch screen, and the touch areas corresponding to the input units are different; as shown in fig. 2, the electronic device includes:
an instruction generating unit 10, configured to obtain a first operation, and generate a first instruction, where the first instruction indicates to listen for an input event;
an instruction execution unit 20, configured to determine, in response to the first instruction, an input area and an input unit corresponding to the touch input, and obtain input data through the determined input unit; and comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing first processing corresponding to a first event when the input data is determined to meet the first event trigger condition.
In an embodiment, the instruction executing unit 20 is further configured to, if the obtained input data is used to represent position coordinates and time information of two touch points in the first input area, determine that the input data satisfies the first event trigger condition when a distance between the first touch point and the second touch point is less than or equal to a preset first threshold, and a time difference between a touch end time of the first touch point and a touch start time of the second touch point is less than or equal to a preset second threshold.
In one embodiment, instruction execution unit 20 is further configured to,
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
In one embodiment, the instruction execution unit 20 is further configured to perform the following first process:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
In one embodiment, the instruction execution unit 20 is further configured to perform the following first process:
and calculating to obtain the position change rate of the touch point according to the distance between the second touch point and the first frequency, obtaining a corresponding interface scaling according to the position change rate, and executing scaling processing on the content in the display interface according to the scaling.
In one embodiment, instruction execution unit 20 is further configured to,
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to move upwards relative to the first touch point, executing the zooming-in processing of the content in the display interface according to the zooming scale, and when the second touch point is judged to move downwards relative to the first touch point, executing the zooming-out processing of the content in the display interface according to the zooming scale; or, when the second touch point is judged to move upwards relative to the first touch point, executing the reduction processing of the content in the display interface according to the scaling, and when the second touch point is judged to move downwards relative to the first touch point, executing the enlargement processing of the content in the display interface according to the scaling.
It should be noted that, in the embodiment of the present invention, the touch screen of the electronic device is divided into multiple regions, different input units are respectively registered for different touch regions, and different event trigger conditions are preset for different touch regions, so that richer user touch gestures and richer application functions can be implemented. The embodiment of the present invention does not limit the user touch gesture and the application function that can be implemented, and any user touch gesture and application function that can be applied to the electronic device in practical applications should in principle belong to the protection scope of the embodiment of the present invention.
The information processing method and the electronic device according to the embodiments of the present invention are further described below with reference to specific application examples.
EXAMPLE III
An internal frame of an electronic device shown in the third embodiment of the present invention is shown in fig. 4, and mainly includes:
a main body module: the device mainly comprises an equipment management module, a judgment module, a prompt module and a display module; the device management module is mainly an entry point of the interface of the technical scheme of the embodiment of the invention; the judgment module is a key point for monitoring gesture events; the prompting module is used for judging that different gestures are performed with different event processing according to different gesture events of a user; and the display module displays different data to the user under different gestures.
A coordination module: the gesture display system mainly comprises a gesture support module, and is used for analyzing gesture information and sending different gesture information to a client side for displaying.
Example four
The fourth embodiment of the present invention provides a process in which a user performs text translation by double-clicking the area where the left or right side frame is located; as shown in fig. 5, the process mainly includes:
Step 501: start; the user opens a page on the device, for example for browsing the web, reading news, or reading a novel.
It should be noted that when the touch screen driver is initialized, two input units are registered; for example, area A (comprising A1 and A2) corresponds to input0, and area B corresponds to input1. When a touch point is reported, it is judged whether the point lies in area A or area B, and the corresponding input unit is used for reporting.
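The region-to-input-unit routing described in this note can be sketched as below; the rectangle coordinates, region names, and the screen dimensions are hypothetical examples, not values from the original:

```python
# Hypothetical region table: each input unit owns one or more rectangles
# (x0, y0, x1, y1). Area A (A1, the left frame, and A2, the right frame)
# maps to input0; area B (the main screen area) maps to input1.
REGIONS = {
    "input0": [(0, 0, 50, 800), (430, 0, 480, 800)],
    "input1": [(50, 0, 430, 800)],
}

def route_touch(x, y):
    """Return the input unit whose region contains the point (x, y)."""
    for unit, rects in REGIONS.items():
        for (x0, y0, x1, y1) in rects:
            if x0 <= x < x1 and y0 <= y < y1:
                return unit
    return None  # point falls outside every registered region
```

Because each touch area belongs to exactly one input unit, the reported event stream is already partitioned by region before any gesture logic runs.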
Step 502: gesture event monitoring on the device starts.
Step 503: the background device service parses the gesture through the sensor and monitors gesture action events; if the left frame of the device is double-clicked, go to step 504; if the right frame is double-clicked, go to step 505; and if the left or right frame is slid, go to step 506.
Step 504: the background language service parses the current gesture and translates the text content of the current page, translating Chinese into English; then go to step 507.
Step 505: the background language service parses the current gesture and translates the text content of the current page, translating English into Chinese; then go to step 507.
Step 506: the background language service parses the current gesture; sliding the left or right side frame upward enlarges the text, and sliding it downward reduces the text.
Step 507: detect whether the left or right frame is double-clicked; if so, go to step 508, and if not, go to step 511.
Specifically, when step 507 is reached from step 504, go to step 508 if a double-click on the left frame is detected and to step 511 otherwise; alternatively, go to step 508 when a double-click on the right frame is detected and to step 511 otherwise.
When step 507 is reached from step 505, go to step 508 if a double-click on the right frame is detected and to step 511 otherwise; alternatively, go to step 508 when a double-click on the left frame is detected and to step 511 otherwise.
Step 508: the background language service parses the current gesture, restores the text to its pre-translation form, and goes to step 511.
Step 509: detect whether the screen is double-clicked; if so, go to step 510, and if not, go to step 511.
Step 510: the background language service parses the current gesture, restores the text to its original size, and goes to step 511.
Step 511: end.
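The branching in steps 503-510 can be sketched as a simple dispatcher. The event and action labels below are illustrative stand-ins for the actual gesture events and background services, which the original describes only in prose:

```python
def handle_gesture(event):
    """Map a monitored gesture event to the processing of steps 504-510.

    Returns a label for the action taken; the real translation and
    zoom services sit behind these labels and are outside this sketch.
    """
    if event == "double_click_left_frame":
        return "translate_zh_to_en"   # step 504
    if event == "double_click_right_frame":
        return "translate_en_to_zh"   # step 505
    if event == "slide_frame_up":
        return "enlarge_text"         # step 506
    if event == "slide_frame_down":
        return "reduce_text"          # step 506
    if event == "double_click_screen":
        return "restore_text_size"    # steps 509-510
    return "no_op"                    # step 511
```

Keeping the mapping in one place makes it easy to preset a different event trigger condition, and hence a different action, per touch region.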
As shown in fig. 6, the input event flow in the Android system is as follows: the Kernel reports touch screen input events, the Native Framework layer receives and dispatches the input events, and the Java Framework layer receives the dispatched input events through an InputEventReceiver and performs the corresponding processing.
As shown in fig. 7, the process of identifying the double-click side frame area can be described as follows:
Assume that two consecutive touch points in the input event sequence reported by the touch screen are Pn and Pn+1. A single-click judgment is performed on Pn during its press-and-lift; if Pn is not a valid single click, Pn and Pn+1 do not constitute a valid double-click operation; otherwise, continue judging.
When Pn+1 is pressed, its touch position coordinates (X, Y) and press timestamp are recorded; if the time difference between the press timestamp of Pn+1 and the lift timestamp of Pn is greater than a preset time threshold, Pn and Pn+1 do not constitute a valid double-click operation; otherwise, continue judging.
If the distance between the press position of Pn+1 and the lift position of Pn is greater than a preset distance threshold, Pn and Pn+1 do not constitute a valid double-click operation; otherwise, continue judging.
Finally, a single-click judgment is performed on Pn+1; if Pn+1 is a valid single click, Pn and Pn+1 constitute a valid double-click operation; otherwise they do not.
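The double-click judgment above can be sketched as one function. The threshold values and the dictionary field names are illustrative assumptions; the patent specifies only that a time threshold and a distance threshold exist, not their values:

```python
TIME_THRESHOLD = 0.3    # assumed max gap between lift of Pn and press of Pn+1, seconds
DIST_THRESHOLD = 30.0   # assumed max distance between the two touches, pixels

def is_valid_double_click(pn, pn1,
                          time_threshold=TIME_THRESHOLD,
                          dist_threshold=DIST_THRESHOLD):
    """Check whether consecutive touches Pn and Pn+1 form a valid double click.

    pn carries its single-click judgment plus lift data (valid_click,
    up_time, up_x, up_y); pn1 carries its single-click judgment plus
    press data (valid_click, down_time, down_x, down_y).
    """
    if not pn["valid_click"]:                              # Pn must be a valid single click
        return False
    if pn1["down_time"] - pn["up_time"] > time_threshold:  # press of Pn+1 too long after lift of Pn
        return False
    dist = ((pn1["down_x"] - pn["up_x"]) ** 2 +
            (pn1["down_y"] - pn["up_y"]) ** 2) ** 0.5      # press of Pn+1 vs. lift of Pn
    if dist > dist_threshold:
        return False
    return pn1["valid_click"]                              # Pn+1 must also be a valid single click
```

The four early-return checks mirror the four "otherwise, continue judging" stages of the prose description.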
As shown in fig. 8, the process of identifying the sliding of the side frame region can be described as follows:
When a touch point is pressed on the touch screen, the driver reports its position coordinates and press time, and the Framework or upper-layer application records them. Thereafter, the touch screen driver reports the current position coordinates of the touch point at intervals (for example, every 1/85 second); the Framework or upper-layer application calculates the distance between the current position and the press position to judge whether the touch point has slid, and determines the sliding direction by comparing the Y-axis values of the two position coordinates.
Specifically, the method comprises the following steps:
1. When a touch point is pressed on the touch screen, the Framework or upper-layer application records its press position coordinates (downX, downY) and press time (downTime);
2. The touch screen reports the current position coordinates of the touch point (currentX, currentY) at intervals (for example, every 1/85 second);
3. Calculate the distance between the current position and the press position of the touch point; the touch point is considered to have slid when the distance is greater than a certain threshold. There are two methods for calculating the distance:
3.1. distance = sqrt((currentX - downX)^2 + (currentY - downY)^2);
3.2. distance = |currentY - downY|;
4. Compare the Y-axis values of the current position and the press position of the touch point; there are two methods:
4.1. if currentY > downY, the direction is downward; if currentY < downY, the direction is upward;
4.2. let tmp = currentY - downY; if tmp > 0, the direction is downward, and if tmp < 0, the direction is upward.
That is, whether a slide has occurred and its direction are determined from the distance between, and the Y-axis coordinates of, the touch point positions at different times.
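The slide judgment above can be sketched as follows; the slide threshold value is an illustrative assumption, and `y_axis_only` selects between the two distance methods (3.1 and 3.2) from the text:

```python
SLIDE_THRESHOLD = 20.0  # assumed minimum travel in pixels to count as a slide

def detect_slide(down_x, down_y, current_x, current_y,
                 threshold=SLIDE_THRESHOLD, y_axis_only=False):
    """Judge sliding and its direction from the press and current positions.

    y_axis_only=False uses the Euclidean distance (method 3.1);
    y_axis_only=True uses |currentY - downY| (method 3.2).
    Direction follows the text's convention that a larger Y means
    downward. Returns (is_slide, direction), direction in {"up", "down", None}.
    """
    if y_axis_only:
        distance = abs(current_y - down_y)
    else:
        distance = ((current_x - down_x) ** 2 +
                    (current_y - down_y) ** 2) ** 0.5
    if distance <= threshold:
        return (False, None)          # movement too small to be a slide
    direction = "down" if current_y > down_y else "up"
    return (True, direction)
```

In practice this function would be called on each periodic position report (e.g. every 1/85 second) against the recorded press position.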
In summary, the embodiment of the present invention divides the touch screen of the electronic device into multiple regions, registers different input units for different touch regions, and presets different event trigger conditions for different touch regions, thereby implementing richer user touch gestures and richer application functions. In application, by operating the side-frame input area of the electronic device, the embodiment of the present invention makes it very simple to translate the text on the current page and adjust its size, which is a convenient function for reading novels and browsing foreign-language information, and is also helpful for improving the user's foreign-language proficiency.
The embodiment of the present invention does not limit the user touch gesture and the application function that can be implemented, and any user touch gesture and application function that can be applied to the electronic device in practical applications should in principle belong to the protection scope of the embodiment of the present invention.
In addition, the embodiment of the invention is not only applicable to the touch screen with a flat surface as shown in fig. 1, but also applicable to the touch screen with a bent side edge as shown in fig. 9.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (12)

1. An information processing method applied to an electronic device, characterized in that the electronic device supports touch screen input, the touch screen has N corresponding input units, the N input units correspond to N touch areas on the touch screen, and the touch area corresponding to each input unit is different; the method comprises the following steps:
obtaining a first operation, and generating a first instruction, wherein the first instruction indicates to monitor an input event;
responding to the first instruction, determining an input area and an input unit corresponding to the touch input, and obtaining input data through the determined input unit;
and comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing first processing corresponding to a first event when the input data is determined to meet the first event trigger condition.
2. The information processing method according to claim 1, wherein the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is satisfied comprises:
if the obtained input data is used for representing the position coordinates and the time information of two touch points in the first input area, when the distance between the first touch point and the second touch point is smaller than or equal to a preset first threshold value, and the time difference between the touch ending time of the first touch point and the touch starting time of the second touch point is smaller than or equal to a preset second threshold value, it is determined that the input data meets a first event triggering condition.
3. The information processing method according to claim 1, wherein the comparing the obtained input data with a preset event trigger condition to determine whether the event trigger condition is satisfied comprises:
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
4. The information processing method according to claim 2 or 3, wherein the executing of the first processing corresponding to the first event trigger condition includes:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
5. The information processing method according to claim 3, wherein the executing of the first processing corresponding to the first event trigger condition includes:
calculating a position change rate of the touch point according to the distance between the second touch point and the first touch point and according to the first frequency, obtaining a corresponding interface scaling ratio according to the position change rate, and scaling the content in the display interface according to the scaling ratio.
6. The information processing method according to claim 5, wherein the scaling of the content in the display interface includes:
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to have moved upward relative to the first touch point, enlarging the content in the display interface according to the scaling ratio, and when the second touch point is judged to have moved downward relative to the first touch point, reducing the content in the display interface according to the scaling ratio; or, when the second touch point is judged to have moved upward relative to the first touch point, reducing the content in the display interface according to the scaling ratio, and when the second touch point is judged to have moved downward relative to the first touch point, enlarging the content in the display interface according to the scaling ratio.
7. An electronic device, characterized in that the electronic device supports touch screen input, the touch screen has N corresponding input units, the N input units correspond to N touch areas on the touch screen, and the touch area corresponding to each input unit is different; the electronic device comprises:
an instruction generation unit, configured to obtain a first operation and generate a first instruction, wherein the first instruction indicates monitoring of input events;
the instruction execution unit is used for responding to the first instruction, determining an input area and an input unit corresponding to the touch input, and obtaining input data through the determined input unit; and comparing the obtained input data with a preset event trigger condition, judging whether the event trigger condition is met, and executing first processing corresponding to a first event when the input data is determined to meet the first event trigger condition.
8. The electronic device according to claim 7, wherein the instruction execution unit is further configured to determine that the input data satisfies the first event triggering condition when a distance between the first touch point and the second touch point is less than or equal to a preset first threshold and a time difference between a touch ending time of the first touch point and a touch starting time of the second touch point is less than or equal to a preset second threshold if the obtained input data is used to represent position coordinates and time information of two touch points in the first input area.
9. The electronic device of claim 7, wherein the instruction execution unit is further configured to,
after the position coordinate and the time information of the first touch point are obtained, continuously acquiring the position coordinate of a second touch point at a first frequency;
and judging whether the distance between the second touch point and the first touch point is greater than a third threshold value or not according to the position coordinate of the second touch point and the position coordinate of the first touch point, and determining that the input data meets a first event triggering condition when the distance is determined to be greater than the third threshold value.
10. The electronic device of claim 8 or 9, wherein the instruction execution unit is further configured to perform the following first process:
the method comprises the steps of obtaining text content in a display interface, judging the language type of the text content, and calling a translation engine to translate the text content from a first language type to a second language type.
11. The electronic device of claim 9, wherein the instruction execution unit is further configured to perform a first process of:
calculating a position change rate of the touch point according to the distance between the second touch point and the first touch point and according to the first frequency, obtaining a corresponding interface scaling ratio according to the position change rate, and scaling the content in the display interface according to the scaling ratio.
12. The electronic device of claim 11, wherein the instruction execution unit is further configured to,
comparing the Y-axis coordinates of the second touch point and the first touch point, and judging whether the second touch point moves upwards or downwards relative to the first touch point;
when the second touch point is judged to have moved upward relative to the first touch point, enlarging the content in the display interface according to the scaling ratio, and when the second touch point is judged to have moved downward relative to the first touch point, reducing the content in the display interface according to the scaling ratio; or, when the second touch point is judged to have moved upward relative to the first touch point, reducing the content in the display interface according to the scaling ratio, and when the second touch point is judged to have moved downward relative to the first touch point, enlarging the content in the display interface according to the scaling ratio.
CN201510288274.XA 2015-05-29 2015-05-29 Information processing method and electronic equipment Pending CN104951230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510288274.XA CN104951230A (en) 2015-05-29 2015-05-29 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510288274.XA CN104951230A (en) 2015-05-29 2015-05-29 Information processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN104951230A true CN104951230A (en) 2015-09-30

Family

ID=54165913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510288274.XA Pending CN104951230A (en) 2015-05-29 2015-05-29 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104951230A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106020700A (en) * 2016-05-26 2016-10-12 维沃移动通信有限公司 Translation method and terminal equipment
CN111045578A (en) * 2018-10-12 2020-04-21 阿里巴巴集团控股有限公司 Display control method, display control device, terminal device and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100302281A1 (en) * 2009-05-28 2010-12-02 Samsung Electronics Co., Ltd. Mobile device capable of touch-based zooming and control method thereof
CN102023735A (en) * 2009-09-21 2011-04-20 联想(北京)有限公司 Touch input equipment, electronic equipment and mobile phone
CN102270197A (en) * 2010-06-01 2011-12-07 英业达股份有限公司 Touch translation system and method thereof
CN102541319A (en) * 2010-12-20 2012-07-04 联想(北京)有限公司 Electronic equipment and display processing method thereof
CN103246382A (en) * 2012-02-13 2013-08-14 联想(北京)有限公司 Control method and electronic equipment
CN104063142A (en) * 2013-03-21 2014-09-24 联想(北京)有限公司 Information processing method, device and electronic equipment
CN104395873A (en) * 2012-06-20 2015-03-04 三星电子株式会社 Apparatus including a touch screen and screen change method thereof


Similar Documents

Publication Publication Date Title
CN106484266B (en) Text processing method and device
US7103852B2 (en) Dynamic resizing of clickable areas of touch screen applications
US9195345B2 (en) Position aware gestures with visual feedback as input method
CN108829327B (en) Writing method and device of interactive intelligent equipment
CN103677594B (en) Text handling method and device
CN103197880B (en) The method and apparatus that keyboard is shown in the terminal with touch-screen
JP2015153420A (en) Multitask switching method and system and electronic equipment having the same system
EP2905689A1 (en) Method and apparatus for displaying character on touchscreen
EP2869174A1 (en) Method and device for text input and display of intelligent terminal
CN102768595B (en) A kind of method and device identifying touch control operation instruction on touch-screen
CN103425394A (en) Method and device for changing icon position for touch screen
CN106415472A (en) Gesture control method, device, terminal apparatus and storage medium
CN104199607A (en) Candidate selection method and device based on input method
CN109144309B (en) Touch control method and device, storage medium and terminal equipment
US9542764B2 (en) Displaying contents of a file in different regions
CN107577404B (en) Information processing method and device and electronic equipment
CN104951230A (en) Information processing method and electronic equipment
TWI607369B (en) System and method for adjusting image display
CN102855076A (en) Control method and control device of touch screen and mobile terminal device
CN101615100B (en) Computer and notebook computer
CN112162689B (en) Input method and device and electronic equipment
US10635224B2 (en) Information input method and apparatus for touch screen
CN104407763A (en) Content input method and system
CN104731920B (en) Information search method and device
CN103207746B (en) A kind of funcall method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150930

RJ01 Rejection of invention patent application after publication