
CN110799933A - Disambiguating gesture input types using multi-dimensional heat maps - Google Patents


Info

Publication number
CN110799933A
Authority
CN
China
Prior art keywords
user input
determining
heat maps
user
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880039649.0A
Other languages
Chinese (zh)
Inventor
Philip Quinn
Shumin Zhai
Wenxin Feng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Publication of CN110799933A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A computing device is described that receives an indication representative of a user input entered at a region of a presence-sensitive screen over a duration of time. The computing device may determine, based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input. Based on the plurality of multi-dimensional heat maps, the computing device may determine a change in a shape of the plurality of multi-dimensional heat maps over the duration of time and determine a classification of the user input in response to the change in the shape of the plurality of multi-dimensional heat maps. The computing device may then perform an operation associated with the classification of the user input.

Description

Disambiguating gesture input types using multi-dimensional heat maps
Background
Some computing devices (e.g., mobile phones, tablet computers) may receive user input entered at a presence-sensitive screen. For example, a presence-sensitive screen of a computing device may output a graphical user interface (e.g., an interface of a game or operating system) that allows a user to input commands by tapping and/or gesturing at or near graphical elements (e.g., buttons, scroll bars, icons, etc.) displayed at the presence-sensitive screen. The commands may be associated with different operations that the computing device may perform, such as invoking an application associated with the graphical element, repositioning the graphical element within the graphical user interface, switching between different aspects (e.g., pages) of the graphical user interface, scrolling within various aspects of the graphical user interface, and so forth.
To differentiate between different types of gesture inputs, a computing device may determine one or more locations that indicate, for example, a start location within the presence-sensitive display at which a gesture is initiated, an end location within the presence-sensitive display at which the gesture is terminated, and possibly gesture locations that occur between the start location and the end location. The computing device may also determine one or more durations of the gesture (e.g., a duration associated with each location). Based on the durations and the locations of the gesture, the computing device may determine a classification of the type of the gesture, such as whether the gesture is a tap, a long press (e.g., identified when the determined duration exceeds a long-press duration threshold), a long-press swipe, and so on.
Such duration-based gesture classification can be slow (due to having to wait for various duration thresholds to pass). Furthermore, duration-based gesture classification may not be accurate, given that it reduces gestures to a sequence of one or more locations and one or more durations. The slow, imprecise nature of duration-based gesture classification may result in the computing device determining a classification of a command that is inconsistent with the command that the user intended to input via the gesture, resulting in a potentially unresponsive and non-intuitive user experience.
Disclosure of Invention
In one example, the present disclosure is directed to a method comprising: receiving, by one or more processors of a computing device, an indication representing a user input entered at an area of a presence-sensitive screen over a duration of time; and determining, by the one or more processors and based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input. The method further comprises: determining, by the one or more processors and based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over the duration of time; and determining, by the one or more processors and in response to the change in the shape of the plurality of multi-dimensional heat maps, a classification of the user input. The method further comprises: performing, by the one or more processors, an operation associated with the classification of the user input.
In another example, the present disclosure is directed to a computing device comprising: a presence-sensitive screen configured to output an indication representative of a user input entered at a region of the presence-sensitive screen for a duration of time; and one or more processors configured to determine, based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input. The one or more processors are further configured to: determining, based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over a duration of time; and determining a classification of the user input in response to a change in the shape of the plurality of multi-dimensional heat maps. The one or more processors are further configured to perform operations associated with the classification of the user input.
In another example, the present disclosure is directed to a computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to: receiving an indication representative of a user input entered at an area of a presence-sensitive screen for a duration of time; based on the indication representing the user input, determining a plurality of multi-dimensional heat maps indicative of the user input; determining, based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over a duration of time; determining a classification of the user input in response to a change in shape of the plurality of multi-dimensional heat maps; and performing an operation associated with the classification of the user input.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Drawings
Fig. 1 is a block diagram illustrating an example computing device configured to disambiguate user input according to one or more aspects of the present disclosure.
Fig. 2 is a block diagram illustrating another example computing device configured to disambiguate user input according to one or more aspects of the present disclosure.
Fig. 3 is a block diagram illustrating an example computing apparatus outputting graphical content for display at a remote device in accordance with one or more techniques of the present disclosure.
Fig. 4A-4C are conceptual diagrams illustrating example sequences of heat maps used by a computing device to perform disambiguation of user inputs in accordance with aspects of the technology described in this disclosure.
Fig. 5 is a conceptual diagram illustrating an example heat map used by a computing device to determine classifications of user inputs in accordance with various aspects of the technology described in this disclosure.
Fig. 6 is a flowchart illustrating example operations of a computing device configured to perform disambiguation of user inputs in accordance with one or more aspects of the present disclosure.
Detailed Description
In general, this disclosure relates to techniques for enabling a computing device to disambiguate user input received via a presence-sensitive screen over a duration of time based on multiple multi-dimensional heat maps associated with the user input. By analyzing how the multi-dimensional heat maps change shape over the duration of time, the computing device may perform an operation that may be referred to as shape-based disambiguation. Rather than relying solely on disambiguation schemes that only consider the duration of time that a user interacts with the computing device, the shape-based disambiguation techniques set forth in this disclosure may consider the actual shape of the heat maps associated with the user input and/or how the shape of the heat maps changes over the duration of time.
The computing device may identify when the user presses the presence-sensitive screen based on how the shape changes over the duration. That is, using a multi-dimensional heat map that indicates capacitances detected via a two-dimensional area of a presence-sensitive display, a computing device may use shape-based disambiguation to identify when a user is pressing hard on the presence-sensitive display rather than tapping on the presence-sensitive display. To illustrate, as the capacitance value in the multi-dimensional heat map increases, the computing device may determine that the user is pressing the presence-sensitive display.
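As a rough illustration only (the disclosure does not specify an implementation, and the names, contact threshold, and growth factor below are assumptions), the following Python sketch reduces a sequence of capacitance heat maps to two simple signals, summed capacitance and contact area, and flags the input as press-like when both grow over the duration:

```python
import numpy as np

def press_signal(heat_maps, contact_threshold=0.2):
    """Summarize each capacitance heat map (a 2-D array) as total capacitance
    and contact area, then report whether both grow over the sequence."""
    totals = [float(np.asarray(hm, dtype=float).sum()) for hm in heat_maps]
    areas = [int((np.asarray(hm, dtype=float) > contact_threshold).sum())
             for hm in heat_maps]
    # A sustained increase in both signals suggests the finger is flattening
    # against the screen (shape expanding), i.e., a press rather than a tap.
    press_like = totals[-1] > 1.5 * totals[0] and areas[-1] > areas[0]
    return {"totals": totals, "areas": areas, "press_like": press_like}
```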
As such, techniques of this disclosure may improve operation of a computing device. As one example, the techniques may configure the computing device in a manner that facilitates faster classification of user input as compared to disambiguation schemes that rely solely on temporal thresholds. Further, the techniques may facilitate more accurate classification of user inputs by virtue of the increased amount of information, resulting in fewer misclassifications of user inputs. Both benefits may improve user interaction with the computing device, allowing the computing device to recognize user inputs more efficiently (in terms of processor cycles and power utilization). The faster classification provided by the techniques may allow the computing device to utilize fewer processing cycles, thereby saving power. The better accuracy provided by the techniques may allow the computing device to respond in the manner the user expects, such that the user need not undo accidental operations initiated by misclassification of user input and re-enter the user input in an attempt to perform the desired operation, which may reduce the number of processing cycles, thereby saving power.
Throughout this disclosure, examples are described in which a computing device and/or computing system may analyze information (e.g., a heat map) associated with the computing device and/or a user of the computing device only when the computing device and/or computing system receives explicit permission from the user of the computing device to analyze the information. For example, in situations discussed below in which the computing device and/or computing system may collect or may utilize communication information associated with the user and the computing device, the user may be provided with an opportunity to provide input to control whether programs or features of the computing device and/or computing system can collect and utilize user information (e.g., heat maps), or to indicate whether and/or how the computing device and/or computing system may receive content related to the user. Additionally, certain data may be processed in one or more ways before being stored or used by the computing device and/or computing system such that personally identifiable information is removed. For example, the identity of the user may be manipulated such that personally identifiable information about the user cannot be determined. Thus, the user may have control over how information about the user is collected and used by the computing device and/or computing system.
Fig. 1 is a conceptual diagram illustrating a computing device 110 as an example computing device configured to disambiguate user input according to one or more aspects of the present disclosure. Computing device 110 may represent a mobile device, such as a smartphone, tablet computer, laptop computer, computerized watch, computerized eyewear, computerized gloves, or any other type of portable computing device. Additional examples of computing device 110 include a desktop computer, a television, a Personal Digital Assistant (PDA), a portable gaming system, a media player, an electronic book reader, a mobile television platform, a car navigation and entertainment system, a vehicle cabin display, or any other type of wearable and non-wearable mobile or non-mobile computing device that can output a graphical keyboard for display.
Computing device 110 includes a presence-sensitive display (PSD)112 (which may represent one example of a presence-sensitive screen), a User Interface (UI) module 120, a gesture module 122, and one or more application modules 124A-124N ("application modules 124"). Modules 120 through 124 may perform operations described using hardware or a combination of hardware and software and/or firmware residing in and/or executing in computing device 110. Computing device 110 may execute modules 120-124 with multiple processors or multiple devices. Computing device 110 may execute modules 120-124 as virtual machines executing on the underlying hardware. Modules 120 through 124 may execute as one or more services of an operating system or computing platform. Modules 120-124 may execute at the application layer of the computing platform as one or more executable programs.
PSD112 of computing device 110 may represent one example of a presence-sensitive screen and serve as a respective input and/or output device for computing device 110. The PSD112 may be implemented using various techniques. For example, PSD112 may function as an input device using a presence-sensitive input screen, such as a resistive touch screen, a surface acoustic wave touch screen, a capacitive touch screen, a projected capacitance touch screen, a pressure-sensitive screen, an acoustic pulse recognition touch screen, or another presence-sensitive screen technology. The PSD112 may also serve as an output (e.g., display) device using any one or more display devices, such as a Liquid Crystal Display (LCD), a dot matrix display, a Light Emitting Diode (LED) display, an Organic Light Emitting Diode (OLED) display, electronic ink, or similar monochrome or color display capable of outputting visible information to a user of the computing device 110.
PSDs 112 may receive tactile input from a user of a respective computing device 110. The PSD112 may receive indications of tactile input by detecting one or more gestures from a user (e.g., the user touching or pointing to one or more locations of the PSD112 with a finger or stylus). PSD112 may output information to a user as a user interface, which may be associated with functionality provided by computing device 110. For example, PSD112 may present various user interfaces (e.g., an electronic messaging application, an internet browser application, a mobile or desktop operating system, etc.) related to a keyboard, application module 124, an operating system, or other features of a computing platform, operating system, application, and/or service executing at computing device 110 or accessible from computing device 110.
UI module 120 manages user interaction with PSD112 and other components of computing device 110. For example, UI module 120 may output a user interface and may cause PSD112 to display the user interface when a user of computing device 110 views the output and/or provides input at PSD 112. In the example of fig. 1, UI module 120 may interface with PSD112 to present user interface 116. The user interface 116 includes graphical elements 118A-118C displayed at various regions of the PSD 112. When a user interacts with a user interface (e.g., PSD 112), UI module 120 may receive one or more input indications from the user. UI module 120 may interpret the detected input at PSD112 and may relay information related to the detected input to one or more associated platforms, operating systems, application modules 124, and/or services executing at computing device 110, e.g., to cause computing device 110 to perform operations.
In other words, UI module 120 may represent a unit configured to interface with PSD 112 to present a user interface (such as user interface 116), and to receive an indication representative of user input at a region of PSD 112. PSD 112 may output an indication representative of the user input to UI module 120, including identifying the region of PSD 112 that received the user input. Before gesture module 122 classifies the user input, UI module 120 may receive and process output from PSD 112. UI module 120 may process the indication in any number of ways, such as, as one example, processing the indication to reduce the indication to a sequence of one or more points that occur within the duration of time.
To process the user input, UI module 120 may receive the indication representing the user input as a sequence of capacitance indications. The capacitance indications may represent capacitances reflecting the user input at initial region 119A. The capacitance indications may define the capacitance of each point of a two-dimensional grid in region 119A of PSD 112 (thereby defining a graph that may be referred to as a "heat map" or "capacitance heat map"). UI module 120 may evaluate the capacitance indications of region 119A to determine a centroid reflecting the principal point of the user input at region 119A. That is, the user input may span multiple capacitance points in the two-dimensional grid, with different capacitance values reflecting the degree of user contact with PSD 112. The higher the capacitance value, the more extensive the contact with PSD 112, and the more strongly the underlying capacitance point indicates the intended location of the user input. UI module 120 may determine the centroid coordinates using any number of processes, some of which may involve application of a spatial model, such as one that uses a bivariate Gaussian model for a graphical element (e.g., a key in a virtual keyboard user interface).
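Purely for illustration (the disclosure does not prescribe a particular formula, and the example grid values below are made up), a capacitance-weighted centroid over the two-dimensional grid of region 119A could be computed along these lines:

```python
import numpy as np

def capacitance_centroid(heat_map):
    """Return the capacitance-weighted centroid (row, col) of a 2-D heat map.

    Cells with higher capacitance (more extensive contact) pull the centroid
    toward the likely intended location of the user input."""
    hm = np.asarray(heat_map, dtype=float)
    total = hm.sum()
    if total == 0:
        return None  # no contact detected
    rows, cols = np.indices(hm.shape)
    return (float((rows * hm).sum() / total),
            float((cols * hm).sum() / total))

# Example: contact concentrated toward the lower-right of a small grid.
grid = [[0.0, 0.1, 0.0],
        [0.1, 0.6, 0.8],
        [0.0, 0.7, 0.9]]
print(capacitance_centroid(grid))  # approximately (1.47, 1.5)
```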
UI module 120 may output one or more centroid coordinates as an indication representing the user input, instead of the capacitance indications, to facilitate real-time or near real-time processing of the user input. Real-time or near real-time processing of user input may improve the user experience by reducing latency and achieving better responsiveness. UI module 120 may output, for each centroid coordinate, the centroid coordinate along with a timestamp indicating when each indication of the centroid coordinate was determined. To facilitate explanation of the process, UI module 120 may output the centroid coordinates and corresponding timestamps to gesture module 122 and/or other modules not shown in the example of fig. 1 as an indication representative of the user input.
Although shown as separate from PSD 112, UI module 120 may be integrated within PSD 112. In other words, PSD 112 may implement the functionality described with respect to UI module 120 in hardware or a combination of hardware and software.
Gesture module 122 may represent a component configured to process one or more indications representative of user input to determine a classification of the user input. The gesture module 122 may determine different types of classifications, including a long press event, a tap event, a scroll event, a drag event (which may refer to a long press followed by a movement), and so forth.
Gesture module 122 may perform time-based thresholding to determine a classification of the user input based on the indication representative of the user input. For example, gesture module 122 may determine a long press event classification when the centroid coordinate remains in a relatively stable position for a duration (measured by a difference of the corresponding timestamps) that exceeds the long press duration threshold. As another example of time-based thresholding, gesture module 122 may determine a tap event classification when the chronologically final timestamp is less than a tap duration threshold.
Gesture module 122 may also perform spatial thresholding to determine various spatial event classifications based on indications representative of user input, such as a scroll event classification, a swipe event classification, a drag event classification, and so forth. For example, gesture module 122 may determine the scroll event classification when the distance between two centroid coordinates exceeds a distance threshold.
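A minimal sketch of this kind of time-based and spatial thresholding is shown below; the threshold values (500 ms, 300 ms, 20 px) and the sample format are illustrative assumptions rather than values taken from the disclosure:

```python
import math

def classify_by_thresholds(samples, long_press_ms=500, tap_ms=300, scroll_px=20):
    """Classify a gesture from a sequence of (timestamp_ms, x, y) centroid samples.

    Returns 'scroll', 'long_press', 'tap', or 'undecided'."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    duration = t1 - t0
    distance = math.hypot(x1 - x0, y1 - y0)
    if distance > scroll_px:       # spatial thresholding
        return "scroll"
    if duration >= long_press_ms:  # time-based thresholding
        return "long_press"
    if duration <= tap_ms:
        return "tap"
    return "undecided"

print(classify_by_thresholds([(0, 100, 200), (120, 101, 201)]))  # prints: tap
```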
Gesture module 122 may output the classification of the user input to UI module 120. UI module 120 may perform operations associated with the classification of the user input, such as scrolling user interface 116 when the classification indicates a scrolling event, opening a menu when the classification indicates a long press event, or invoking one of application modules 124 when the classification indicates a tap event.
In some cases, UI module 120 may perform operations with respect to graphical elements (such as graphical elements 118). For example, UI module 120 may determine one or more underlying graphical elements 118 displayed at locations within PSD 112 identified by one or more centroids (or, in other words, centroid coordinates). UI module 120 may then perform operations associated with the classification of the user input with respect to the one or more graphical elements 118. To illustrate, given that the user input is classified as a long press event centered on graphical element 118A, and that graphical element 118A is an icon associated with one of application modules 124, UI module 120 may generate a long press menu including operations that can be performed by the one of application modules 124, and interface with PSD 112 to update user interface 116 to display the long press menu with quick links to perform additional operations provided by the one of application modules 124.
Each application module 124 is an executable application, or sub-component thereof, that performs one or more particular functions or operations of the computing device 110, such as an electronic messaging application, a text editor, an internet web browser, or a gaming application. Each application module 124 may independently perform various functions of the computing device 110 or may operate in cooperation with other application modules 124 to perform functions.
As mentioned above, gesture module 122 may perform time-based thresholding to determine a plurality of different classifications of user input. While time-based thresholding may allow functional interaction with computing device 110 via PSD 112, the various duration thresholds typically introduce latency that may impact the user experience and make user input detection feel arbitrary. While shortening the duration thresholds may improve the overall user experience, shortened duration thresholds may result in erroneous disambiguation of user inputs (where, for example, a user input intended as a long press is incorrectly classified as a scroll event) or other misclassifications of user inputs. Even infrequent misclassifications can result in a frustrating user experience, because the user may come to view computing device 110 as unintuitive, operating incorrectly, or failing.
In accordance with techniques described in this disclosure, gesture module 122 may perform heat map-based disambiguation of user input. Rather than reducing the heat map to a single centroid coordinate that maps to one pixel of PSD112, gesture module 122 may receive a sequence of heat maps (in whole or in part), which UI module 120 may determine based on an indication representative of a user input, as mentioned above. The sequence of heat maps (which may also be more generally referred to as "heat maps" because the techniques may operate with respect to heat maps received in a non-sequential order) may provide a more detailed representation of user input, as compared to the sequence of centroid coordinates, thereby potentially allowing faster, more accurate disambiguation of user input.
Given that each heat map provides a two-dimensional representation of the user input (e.g., a square grid of capacitance indications around the centroid coordinates), and that the heat map sequence varies in a third dimension (i.e., time), the heat map sequence may be referred to as a multi-dimensional heat map sequence. Based on the multi-dimensional heat map sequence representing the user input over the duration, gesture module 122 may determine a change in a shape of the multi-dimensional heat map sequence over the duration.
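One hypothetical way to make the "change in shape" concrete is to track a few per-frame shape descriptors across the multi-dimensional heat map sequence; the feature set below (contact area, peak capacitance, and spatial spread) is an assumption chosen for illustration, not a feature set taken from the disclosure:

```python
import numpy as np

def shape_descriptors(heat_map_sequence, contact_threshold=0.2):
    """Return per-frame shape features for a sequence of 2-D capacitance heat maps:
    contact area (cells above threshold), peak value, and spread of contact cells."""
    features = []
    for hm in heat_map_sequence:
        hm = np.asarray(hm, dtype=float)
        mask = hm > contact_threshold
        area = int(mask.sum())
        peak = float(hm.max())
        spread = float(np.argwhere(mask).std()) if area else 0.0
        features.append((area, peak, spread))
    # Downstream logic can inspect how these descriptors change over time.
    return features
```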
The change in shape may represent many different types of events. For example, when a user presses their finger against the screen, the natural plasticity of the finger may cause the shape to expand, possibly indicating that more pressure is being applied to PSD 112, and gesture module 122 may disambiguate the input as a hard press event. As another example, gesture module 122 may disambiguate little change in shape as a tap event. In this regard, gesture module 122 may determine a classification of the user input in response to the change in the shape of the multi-dimensional heat maps.
In some examples, gesture module 122 may combine time-based disambiguation and shape-based disambiguation to potentially determine a classification of the user input more quickly or to improve the accuracy of the classification process. For example, gesture module 122 may use the change in shape to more quickly determine that the user input is a press event (e.g., as fast as 100 milliseconds (ms)), allowing the time-based threshold to be set low (compared to thresholds of up to 500 ms in purely time-based disambiguation schemes). As another example, gesture module 122 may use the additional information in the form of the shape change, evaluated relative to a tap event threshold, to verify that the user input is a tap event and thereby reduce instances of misinterpreting an intended tap as a scroll event.
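The combination might be pictured as in the sketch below, where an early shape-change signal short-circuits the longer duration threshold; the 100 ms and 500 ms figures echo the examples in the preceding paragraph, and the shape_grew flag is a hypothetical output of shape-based disambiguation:

```python
def classify_press(elapsed_ms, shape_grew, early_ms=100, fallback_ms=500):
    """Combine shape-based and time-based evidence for a press event.

    shape_grew: True if the heat-map shape expanded over the duration
    (a hypothetical signal from shape-based disambiguation)."""
    if elapsed_ms >= early_ms and shape_grew:
        return "press"      # shape change allows an early decision
    if elapsed_ms >= fallback_ms:
        return "press"      # otherwise fall back to the pure time threshold
    return "undecided"
```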
The gesture module 122 may also utilize the shape of any one of the heat map sequences to derive additional information about the user input. For example, the gesture module 122 may determine which hand of the user is used to input the user input based on the shape of one or more of the multi-dimensional heat map sequences. As another example, the gesture module 122 may also determine which finger of the user is used to input the user input.
Gesture module 122 may output the classification to other modules of the operating system (not shown in the example of fig. 1 for ease of illustration) that may perform some operation associated with the classification of the user input (e.g., invoke one of application modules 124, or pass the classification to one of application modules 124, which may itself perform operations such as scrolling, transitioning between views, presenting a menu, etc.). In general, in this regard, computing device 110 may perform operations associated with the classification of the user input.
As such, techniques of this disclosure may improve the operation of computing device 110. As one example, the techniques may configure computing device 110 in a manner that facilitates faster classification of user input as compared to disambiguation schemes that rely solely on temporal thresholds. Further, the techniques may facilitate more accurate classification of user inputs by virtue of the increased amount of information, resulting in fewer misclassifications of user inputs. Both benefits may improve user interaction with computing device 110, allowing computing device 110 to recognize user inputs more efficiently (in terms of processor cycles and power utilization). The faster classification provided by the techniques may allow computing device 110 to utilize fewer processing cycles, thereby saving power. The better accuracy provided by the techniques may allow computing device 110 to respond in the manner the user expects, such that the user need not undo accidental operations initiated by misclassification of user input and re-enter the user input in an attempt to perform the desired operation, which may reduce the number of processing cycles, thereby saving power.
Fig. 2 is a block diagram illustrating an example computing device configured to disambiguate user input in accordance with one or more aspects of the present disclosure. Computing device 210 of fig. 2 is described below as an example of computing device 110 illustrated in fig. 1. Fig. 2 illustrates only one particular example of computing device 210, and many other examples of computing device 210 may be used in other instances and may include a subset of the components included in example computing device 210, or may include additional components not shown in fig. 2.
As shown in the example of fig. 2, computing device 210 includes PSD 212, one or more processors 240, one or more communication units 242, one or more input components 244, one or more output components 246, and one or more storage components 248. Presence-sensitive display 212 includes display component 202 and presence-sensitive input component 204. Storage components 248 of computing device 210 may include UI module 220, gesture module 222, and one or more application modules 224. Additionally, storage component 248 is configured to store a multidimensional heat map ("MDHM") store 260A and a threshold data store 260B (collectively, "data store 260"). Gesture module 222 may include a shape-based disambiguation ("SBD") model module 226 and a time-based disambiguation ("TBD") model module 228. Communication channel 250 may interconnect each of components 212, 240, 242, 244, 246, 248, 220, 222, 224, 226, 228, and 260 for inter-component communication (physically, communicatively, and/or operatively). In some examples, communication channel 250 may include a system bus, a network connection, an interprocess communication data structure, or any other method for communicating data.
One or more communication units 242 of computing device 210 may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals over one or more networks. Examples of the communication unit 242 include: a network interface card (such as, for example, an ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of the communication unit 242 may include a short wave radio, a cellular data radio, a wireless network radio, and a Universal Serial Bus (USB) controller.
One or more input components 244 of computing device 210 may receive input. Examples of input are tactile input, audio input, and video input. In one example, input components 244 of computing device 210 include a presence-sensitive input device (e.g., touch-sensitive screen, PSD), mouse, keyboard, voice response system, camera, microphone, or any other type of device for detecting input from a human or machine. In some examples, input components 244 may include one or more sensor components, such as one or more location sensors (GPS components, Wi-Fi components, cellular components), one or more temperature sensors, one or more movement sensors (e.g., accelerometers, gyroscopes), one or more pressure sensors (e.g., barometers), one or more ambient light sensors, and one or more other sensors (e.g., microphones, cameras, infrared proximity sensors, hygrometers, etc.). Other sensors may include heart rate sensors, magnetometers, glucose sensors, hygrometer sensors, olfactory sensors, compass sensors, and step counter sensors, to name a few other non-limiting examples.
One or more output components 246 of computing device 210 may generate output. Examples of output are tactile output, audio output, and video output. In one example, output components 246 of computing device 210 include a PSD, sound card, video graphics adapter card, speaker, Cathode Ray Tube (CRT) monitor, Liquid Crystal Display (LCD), or any other type of device for generating output to a human or machine.
PSD 212 of computing device 210 includes display component 202 and presence-sensitive input component 204. Display component 202 may be a screen on which PSD 212 displays information, and presence-sensitive input component 204 may detect objects at and/or near display component 202. As one example range, presence-sensitive input component 204 may detect an object, such as a finger or stylus, within two inches or less of display component 202. Presence-sensitive input component 204 may determine the location (e.g., [ x, y ] coordinates) of display component 202 where the object was detected. In another example range, presence-sensitive input component 204 may detect objects that are six inches or less from display component 202, and other ranges are possible. Presence-sensitive input component 204 may use capacitive, inductive, and/or optical recognition techniques to determine the position of display component 202 selected by the user's finger. In some examples, presence-sensitive input component 204 also provides output to the user using tactile, audio, or video stimuli described with respect to display component 202. In the example of FIG. 2, PSD 212 may present a user interface (such as graphical user interface 116 for receiving text input and outputting a sequence of characters inferred from the text input shown in FIG. 1).
Although illustrated as internal components of computing device 210, PSD 212 may also represent external components that share a data path with computing device 210 to transmit and/or receive input and output. For example, in one example, PSD 212 represents a built-in component of computing device 210 that is located within an external enclosure of computing device 210 (e.g., a screen on a mobile phone) and that is physically connected to the external enclosure of computing device 210. In another example, PSD 212 represents an external component of computing device 210 that is external to an enclosure or housing of computing device 210 (e.g., a monitor, projector, etc. that shares a wired and/or wireless data path with computing device 210) and is physically separate from the enclosure or housing of computing device 210.
PSD 212 of computing device 210 may receive tactile input from a user of computing device 210. PSD 212 may receive an indication of a tactile input by detecting one or more tap or non-tap gestures from a user of computing device 210 (e.g., the user touching or pointing to one or more locations of PSD 212 with a finger or stylus). PSD 212 may present output to a user. PSD 212 may present the output as a graphical user interface (e.g., graphical user interface 114 of fig. 1) that may be associated with the functionality provided by various functionalities of computing device 210. For example, PSD 212 may present various user interfaces (e.g., an electronic message application, a navigation application, an internet browser application, a mobile operating system, etc.) of components of a computing platform, operating system, application, or service executing at computing device 210 or accessible by computing device 210. The user may interact with the respective user interfaces to cause computing device 210 to perform operations related to one or more of the various functions. A user of computing device 210 may view output presented as feedback associated with text input functions and provide input to PSD 212 using the text input functions to compose text.
PSD 212 of computing device 210 may detect two-dimensional and/or three-dimensional gestures as input from a user of computing device 210. For example, a sensor of PSD 212 may detect movement of the user (e.g., moving a hand, arm, pen, stylus, etc.) within a threshold distance of the sensor of PSD 212. PSD 212 may determine a two-dimensional or three-dimensional vector representation of the movement and correlate the vector representation with a gesture input having multiple dimensions (e.g., waving, pinching, applauding, stroking, etc.). In other words, PSD 212 may detect multi-dimensional gestures without requiring the user to gesture at or near the screen or surface where PSD 212 outputs information for display. In contrast, PSD 212 may detect multi-dimensional gestures performed at or near a sensor that may or may not be located near a screen or surface on which PSD 212 outputs information for display.
The one or more processors 240 may implement the functionality and/or execute instructions associated with the computing device 210. Examples of processor 240 include an application processor, a display controller, an auxiliary processor, one or more sensor hubs, and any other hardware configured to act as a processor, processing unit, or processing device. Modules 220, 222, 224, 226, and 228 may be operable by processor 240 to perform various actions, operations, or functions of computing device 210. For example, processor 240 of computing device 210 may retrieve and execute instructions stored by storage component 248 that cause processor 240 to execute operational modules 220, 222, 224, 226, and 228. The instructions, when executed by the processor 240, may cause the computing device 210 to store information within the storage component 248.
One or more storage components 248 within computing device 210 may store information for processing during operation of computing device 210 (e.g., computing device 210 may store data accessed by modules 220, 222, 224, 226, and 228 during execution at computing device 210). In some examples, storage component 248 is a temporary memory, meaning that the primary purpose of storage component 248 is not long-term storage. Storage component 248 on computing device 210 may be configured for short-term storage of information as volatile memory, and therefore, will not retain stored content if power is lost. Examples of volatile memory include Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), and other forms of volatile memory known in the art.
In some examples, storage component 248 also includes one or more computer-readable storage media. In some examples, storage component 248 includes one or more non-transitory computer-readable storage media. Storage component 248 may be configured to store a greater amount of information than is typically stored by volatile memory. Storage component 248 may further be configured for long-term storage of information as non-volatile memory space, and to retain information after power on/off cycles. Examples of non-volatile memory include magnetic hard disks, optical disks, floppy disks, flash memory, or forms of electrically programmable memory (EPROM) or Electrically Erasable and Programmable (EEPROM) memory. Storage component 248 may store program instructions and/or information (e.g., data) associated with modules 220, 222, 224, 226, and 228 and data store 260. Storage component 248 may include memory configured to store data or other information associated with modules 220, 222, 224, 226, and 228, and data store 260.
UI module 220 may include all of the functionality of UI module 120 of computing device 110 of fig. 1, and may perform operations similar to UI module 120 to manage a user interface (e.g., user interface 116) provided by computing device 210 at presence-sensitive display 212 for handling input from a user. UI module 220 may transmit display commands and data over communication channel 250 to cause PSD 212 to present a user interface at PSD 212. For example, UI module 220 may detect an initial user input selecting one or more keys of a graphical keyboard. In response to detecting the initial selection of one or more keys, UI module 220 may generate one or more touch events based on the initial selection of one or more keys.
Application modules 224 represent all of the various individual applications and services that execute at and are accessible from computing device 210. A user of computing device 210 may interact with interfaces (e.g., graphical user interfaces) associated with one or more application modules 224 to cause computing device 210 to perform functions. Many examples of application modules 224 may exist and include a fitness application, a calendar application, a personal assistant or prediction engine, a search application, a mapping or navigation application, a transportation service application (e.g., a bus or train tracking application), a social media application, a gaming application, an email application, a messaging application, an internet browser application, or any and all other applications that may be executed at computing device 210.
The gesture module 222 may include all of the functionality of the gesture module 122 of the computing device 110 of fig. 1, and may perform similar operations as the gesture module 122 to disambiguate user input. That is, the gesture module 222 may perform various aspects of the techniques described in this disclosure to disambiguate user input, determine a classification of the user input based on the heat map sequence described above.
SBD model module 226 of gesture module 222 may represent a model configured to disambiguate user input based on the shape of the multidimensional heat map sequence stored to MDHM data store 260A. In some examples, each heat map of the multi-dimensional heat map sequence represents capacitance values for an area of presence-sensitive display 212 over a duration of 8 ms. As one example, SBD model module 226 may include a neural network or other machine learning model that is trained to perform the disambiguation techniques described in this disclosure.
TBD model module 228 may represent a model configured to disambiguate user input based on a time-based or in other words duration-based threshold. The TBD model module 228 can perform time-based thresholding to disambiguate user input. As one example, TBD model module 228 may represent a neural network or other machine learning model that is trained to perform time-based disambiguation aspects of the techniques described in this disclosure. Although shown as separate models, SBD model module 226 and TBD model module 228 may be implemented as a single model capable of performing both shape-based and time-based disambiguation aspects of the techniques described in this disclosure.
When applying a neural network or other machine learning algorithm, both SBD model module 226 and TBD model module 228 may be trained based on a set of example indications (such as the heat map and centroid, respectively, mentioned above) that represent user input. That is, SBD model module 226 may be trained using each of a different sequence of heat maps representing user input, a sequence of heat maps associated with different classification events (e.g., long press event, tap event, scroll event, etc.). SBD model module 226 may be trained until it is configured to correctly classify unknown events with a certain confidence (or percentage). Similarly, TBD model module 228 can be trained using each of a different sequence of centroids representing user input, a sequence of centroids associated with different classification events (e.g., long press event, tap event, scroll event, etc.).
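As a toy stand-in for the trained models described above (the disclosure mentions neural networks or other machine learning models without specifying one), the sketch below fits a nearest-class-mean classifier to feature vectors summarizing labeled heat map sequences; the feature layout and example values are assumptions:

```python
import numpy as np

def train_nearest_mean(examples):
    """examples: list of (feature_vector, label) pairs, where each feature vector
    summarizes one heat map sequence (e.g., per-frame area/peak/spread) and each
    label is a classification event such as 'tap' or 'long_press'."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(np.asarray(features, dtype=float))
    return {label: np.mean(vecs, axis=0) for label, vecs in by_label.items()}

def classify(model, features):
    """Return the label whose mean feature vector is closest to `features`."""
    features = np.asarray(features, dtype=float)
    return min(model, key=lambda label: np.linalg.norm(model[label] - features))

# Tiny illustration with made-up three-value feature vectors.
model = train_nearest_mean([([4, 0.3, 1.0], "tap"),
                            ([9, 0.9, 2.5], "long_press")])
print(classify(model, [8, 0.8, 2.4]))  # prints: long_press
```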
MDHM data store 260A may store a plurality of multidimensional heat maps. Although described as storing a sequence of multidimensional heat maps, MDHM data store 260A may store other data related to gesture disambiguation, including handedness, finger identification, or other data. Threshold data store 260B may include one or more time-based, distance-based or space-based, probability, or other comparison values that gesture module 222 uses to infer classification events from user input. The thresholds stored at threshold data store 260B may be variable thresholds (e.g., based on a function or a lookup table) or fixed values.
Although described with respect to handedness (e.g., right hand, left hand) and finger identification (e.g., index finger, thumb, or other fingers), the techniques may determine other data based on the heat map, such as weighted regions of the heat map, perimeters of the heat map (after an edge finding operation), histograms of row/column values of the heat map, peaks of the heat map, peak locations relative to edges, centroid correlation calculations for these features, or derivatives of these features. Threshold data store 260B may also store this other data.
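For illustration only, a few of the auxiliary features named above could be computed roughly as follows; the contact threshold and the 4-neighbor perimeter definition are assumptions:

```python
import numpy as np

def heat_map_features(heat_map, contact_threshold=0.2):
    """Compute a handful of auxiliary heat-map features: row/column histograms,
    peak value, peak distance from the nearest edge, and a rough perimeter."""
    hm = np.asarray(heat_map, dtype=float)
    mask = hm > contact_threshold
    row_hist = hm.sum(axis=1)            # histogram of row values
    col_hist = hm.sum(axis=0)            # histogram of column values
    peak = float(hm.max())
    peak_r, peak_c = np.unravel_index(np.argmax(hm), hm.shape)
    edge_dist = min(peak_r, peak_c,
                    hm.shape[0] - 1 - peak_r, hm.shape[1] - 1 - peak_c)
    # Perimeter: contact cells with at least one non-contact 4-neighbor.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    return {"row_hist": row_hist, "col_hist": col_hist, "peak": peak,
            "peak_edge_distance": int(edge_dist), "perimeter": perimeter}
```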
Presence-sensitive input component 204 may initially receive the capacitive indication, which presence-sensitive input component 204 forms into a plurality of capacitive heatmaps representing the capacitance in the area of presence-sensitive display 212 (e.g., area 114) that reflect the user input entered at the area of presence-sensitive display 212 over the duration of time. In some cases, the communication channel 250 (which may also be referred to as a "bus 250") may have limited throughput (or in other words, bandwidth). In these cases, presence-sensitive input component 204 may reduce the number of indications to obtain a reduced set of indications. For example, presence-sensitive input component 204 may determine a centroid where primary contact with presence-sensitive display 212 occurred and reduce the indication to a centroid-centered indication (such as a centroid-centered 7x7 grid). Presence-sensitive input component 204 may determine a plurality of multi-dimensional heat maps based on the reduced set of indications, which are stored to MDHM data store 260A via bus 250.
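A sketch of the kind of reduction described above, cropping a 7x7 window of capacitance values centered on the centroid before the heat map crosses the bus, might look like the following; the zero-padding behavior at the screen edge is an assumption:

```python
import numpy as np

def crop_centroid_window(heat_map, centroid, size=7):
    """Return a size x size window of capacitance values centered on the centroid,
    zero-padded where the window extends past the edge of the heat map."""
    hm = np.asarray(heat_map, dtype=float)
    half = size // 2
    r0, c0 = int(round(centroid[0])), int(round(centroid[1]))
    window = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            r, c = r0 - half + i, c0 - half + j
            if 0 <= r < hm.shape[0] and 0 <= c < hm.shape[1]:
                window[i, j] = hm[r, c]
    return window
```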
SBD model module 226 may access heat maps stored to MDHM data store 260A, applying one or more neural networks to determine the change in shape of the sequence of multidimensional heat maps over a duration of time. SBD model module 226 may then apply one or more neural networks in response to changes in the shape of the plurality of multi-dimensional heat maps to determine a classification of the user input.
SBD model module 226 may also determine the handedness of the user entering the user input or which finger of the user entering the input was used to enter the user input based on the changes in the shape of the multi-dimensional heat map. SBD model module 226 may apply one or more neural networks to determine handedness or which finger, and apply one or more neural networks to determine a classification of the user input based on the determination of handedness or determination of which finger.
The gesture module 222 may also invoke the TBD model module 228 to determine a classification of the user input using a time-based threshold (possibly in addition to the centroid of the heat map sequence). As an example, TBD model module 228 may determine a tap event based on a duration threshold, the tap event indicating that a user inputting user input performed at least one tap on a presence-sensitive screen. Gesture module 222 may then determine a classification from the combined results output by SBD model module 226 and TBD model module 228.
Fig. 3 is a block diagram illustrating an example computing device outputting graphical content for display at a remote device in accordance with one or more techniques of this disclosure. In general, graphical content may include any visual information that may be output for display, such as text, an image, and a set of moving images, to name a few examples. The example shown in fig. 3 includes computing device 310, PSD 312, communication unit 342, projector 380, projector screen 382, mobile device 386, and visual display component 390. In some examples, PSD 312 may be the presence-sensitive display described in fig. 1-2. Although shown in fig. 1 and 2 as separate computing devices 110 and 210, respectively, for purposes of example, a computing device, such as computing device 310, may generally be any component or system that includes a processor or other suitable computing environment for executing software instructions and, for example, need not include a presence-sensitive display.
As shown in the example of fig. 3, computing device 310 may be a processor that includes the functionality described with respect to processor 240 in fig. 2. In such an example, computing device 310 may be operatively coupled to PSD 312 via communication channel 362A, which communication channel 362A may be a system bus or other suitable connection. Computing device 310 may also be operatively coupled to communication unit 342 by a communication channel 362B, which may also be a system bus or other suitable connection, as described further below. Although shown separately as an example in fig. 3, computing device 310 may be operatively coupled to PSD 312 and communication unit 342 by any number of one or more communication channels.
In other examples, computing devices may refer to portable or mobile devices, such as mobile phones (including smart phones), laptops, and the like, such as previously illustrated by computing devices 110 and 210 in fig. 1 and 2, respectively. In some examples, the computing device may be a desktop computer, a tablet computer, a smart television platform, a camera, a Personal Digital Assistant (PDA), a server, or a mainframe.
PSD 312 may include display component 302 and presence-sensitive input component 304. The display component 302 can receive data from the computing device 310 and display graphical content, for example. In some examples, presence-sensitive input component 304 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures) at PSD 312 using capacitive, inductive, and/or optical recognition techniques and send indications of such user inputs to computing device 310 using communication channel 362A. In some examples, presence-sensitive input component 304 may be physically positioned on top of display component 302 such that, when a user positions an input unit over a graphical element displayed by display component 302, presence-sensitive input component 304 is in a location that corresponds to the location of display component 302 where the graphical element is displayed.
As shown in fig. 3, computing device 310 may also include a communication unit 342 and/or be operatively coupled to communication unit 342. The communication unit 342 may comprise the functionality of the communication unit 242 described in fig. 2. Examples of the communication unit 342 may include: a network interface card, an ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. Other examples of such communication units may include bluetooth, 3G and WiFi radios, Universal Serial Bus (USB) interfaces, and the like. For purposes of brevity and description, computing device 310 may also include and/or be operatively coupled with one or more other devices not shown in fig. 3 (e.g., input devices, output components, memory, storage devices).
Fig. 3 also illustrates a projector 380 and a projector screen 382. Other such examples of projection devices may include electronic whiteboards, holographic display assemblies, and any other suitable device for displaying graphical content. Projector 380 and projector screen 382 may include one or more communication units that enable the respective devices to communicate with computing device 310. In some examples, the one or more communication units may enable communication between projector 380 and projector screen 382. Projector 380 may receive data from computing device 310, including graphical content. In response to receiving the data, projector 380 may project graphical content onto projector screen 382. In some examples, projector 380 may use optical recognition techniques or other suitable techniques to determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures) at the projector screen and send indications of such user inputs to computing device 310 using one or more communication units. In such an example, projector screen 382 may not be necessary, and projector 380 may project graphical content on any suitable medium and detect one or more user inputs using optical recognition techniques or other such suitable techniques.
In some examples, projector screen 382 may include presence-sensitive display 384. Presence-sensitive display 384 may include a subset or all of the functionality of presence-sensitive displays 112, 212, and/or 312 described in this disclosure. In some examples, presence-sensitive display 384 may include additional functionality. Projector screen 382 (e.g., an electronic whiteboard) may receive data from computing device 310 and display graphical content. In some examples, presence-sensitive display 384 may use capacitive, inductive, and/or optical recognition techniques to determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures) at projector screen 382 and send indications of such user inputs to computing device 310 using one or more communication units.
Fig. 3 also illustrates a mobile device 386 and a visual display component 390. Mobile device 386 and visual display component 390 may each include computing and connectivity capabilities. Examples of mobile device 386 may include electronic reader devices, convertible notebook devices, hybrid tablet devices, and the like. Examples of visual display component 390 may include other semi-stationary devices such as televisions, computer monitors, and the like. As shown in fig. 3, mobile device 386 may include presence-sensitive display 388. Visual display component 390 may include presence-sensitive display 392. Presence-sensitive displays 388, 392 may include a subset or all of the functionality of presence-sensitive displays 112, 212, and/or 312 described in this disclosure. In some examples, presence-sensitive displays 388, 392 may include additional functionality. In any case, presence-sensitive display 392 may, for example, receive data from computing device 310 and display graphical content. In some examples, presence-sensitive display 392 may use capacitive, inductive, and/or optical recognition processes to determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures) at its screen and send indications of such user inputs to computing device 310 using one or more communication units.
As described above, in some examples, computing device 310 may output graphical content for display at PSD 312, which is coupled to computing device 310 by a system bus or other suitable communication channel. Computing device 310 may also output graphical content for display at one or more remote devices, such as projector 380, projector screen 382, mobile device 386, and visual display component 390. For example, in accordance with the techniques of this disclosure, computing device 310 may execute one or more instructions to generate and/or modify graphical content. Computing device 310 may output data that includes the graphical content to a communication unit of computing device 310, such as communication unit 342. Communication unit 342 may send the data to one or more of the remote devices, such as projector 380, projector screen 382, mobile device 386, and/or visual display component 390. In this manner, computing device 310 may output graphical content for display at the one or more remote devices. In some examples, the one or more remote devices may output the graphical content at a presence-sensitive display included in and/or operatively coupled to the respective remote device.
In some examples, computing device 310 may not output graphical content at PSD 312, which is operatively coupled to computing device 310. In other examples, computing device 310 may output graphical content for display at both PSD 312, coupled to computing device 310 by communication channel 362A, and at one or more remote devices. In such examples, the graphical content may be displayed at each respective device substantially simultaneously, although some delay may be introduced by the communication latency required to send the data including the graphical content to the remote device. In some examples, the graphical content generated by computing device 310 and output for display at PSD 312 may be different from the graphical content output for display at the one or more remote devices.
Computing device 310 may send and receive data using any suitable communication techniques. For example, computing device 310 may be operatively coupled to external network 374 using network link 373A. Each of the remote devices illustrated in fig. 3 may be operatively coupled to external network 374 by one of the respective network links 373B, 373C, or 373D. External network 374 may include network hubs, network switches, network routers, and the like, that are operatively coupled to one another to provide for the exchange of information between computing device 310 and the remote devices illustrated in fig. 3. In some examples, network links 373A-373D may be ethernet, ATM, or other network connections. Such connections may be wireless and/or wired connections.
In some examples, computing device 310 may be operatively coupled to one or more remote devices included in fig. 3 using direct device communication 378. Direct device communication 378 may include communication by which computing device 310 sends and receives data directly with a remote device using wired or wireless communication. That is, in some examples of direct device communication 378, data sent by computing device 310 may not be forwarded by one or more additional devices before being received at a remote device, and vice versa. Examples of direct device communication 378 may include bluetooth, near field communication, universal serial bus, WiFi, infrared, and the like. One or more of the remote devices illustrated in fig. 3 may be operatively coupled to computing device 310 by communication links 376A-376D. In some examples, communication links 376A-376D may be connections using bluetooth, near field communication, universal serial bus, infrared, and the like. Such a connection may be a wireless and/or wired connection.
In accordance with the techniques of this disclosure, computing device 310 may be operatively coupled to visual display component 390 using external network 374. Computing device 310 may output a graphical user interface for display at PSD 392. For example, computing device 310 may send data comprising a representation of the graphical user interface to communication unit 342. Communication unit 342 may send the data comprising the representation of the graphical user interface to visual display component 390 using external network 374. In response to receiving the data using external network 374, visual display component 390 may cause PSD 392 to output the graphical user interface. In response to receiving user input at PSD 392 to select one or more graphical elements of the graphical user interface, visual display component 390 may send an indication of the user input to computing device 310 using external network 374. Communication unit 342 may receive the indication of the user input and send the indication to computing device 310.
Computing device 310 may receive an indication representative of a user input entered at a region of PSD 392 over a duration of time. Next, computing device 310 may determine, based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input. Based on the plurality of multi-dimensional heat maps, computing device 310 may then determine a change in a shape of the plurality of multi-dimensional heat maps over the duration of time and determine a classification of the user input in response to the change in the shape of the plurality of multi-dimensional heat maps. Computing device 310 may then perform an operation associated with the classification of the user input, which may include updating the graphical user interface. Communication unit 342 may receive a representation of the updated graphical user interface and send the representation to visual display component 390, such that visual display component 390 may cause PSD 392 to output the updated graphical user interface.
Fig. 4A-4C are diagrams illustrating example heat map sequences used by a computing device to perform disambiguation of user inputs in accordance with aspects of the technology described in this disclosure. In the example of fig. 4A-4C, heat maps 402A-402E ("heat map 402"), heat maps 404A-404E ("heat map 404"), and heat maps 406A-406E ("heat map 406") each include a 7x7 grid of capacitance values, where darker colored boxes indicate higher or lower capacitance values relative to lighter colored boxes. Heat map 402 represents a sequence that begins, at the start of the duration, with heat map 402A and proceeds chronologically through heat map 402E. Heat map 402 may therefore represent how the capacitance in the region changes over the duration. Heat maps 404 and 406 are similar to heat map 402 in these respects.
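For illustration only, such a sequence of heat maps might be represented as a list of 7x7 frames sampled at a fixed interval. The Python sketch below is not part of the disclosure; the class name, the 8 ms default interval, and the fixed 7x7 shape check are assumptions chosen to mirror the example above.

```python
import numpy as np

# Hypothetical container for a sequence of 7x7 capacitance heat maps sampled
# over the duration of a touch (e.g., one 7x7 frame every 8 ms).
class HeatMapSequence:
    def __init__(self, frame_interval_ms: float = 8.0):
        self.frames = []                  # list of 7x7 numpy arrays
        self.frame_interval_ms = frame_interval_ms

    def append(self, frame):
        frame = np.asarray(frame, dtype=float)
        assert frame.shape == (7, 7), "expected a 7x7 grid of capacitance values"
        self.frames.append(frame)

    def duration_ms(self) -> float:
        # Total time represented by the sequence (e.g., 5 frames -> 40 ms).
        return len(self.frames) * self.frame_interval_ms
```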
Referring first to fig. 4A, heat map 402 is captured after a user taps presence-sensitive display 212. Gesture module 222 (shown in the example of fig. 2) may invoke SBD model module 226 and TBD model module 228 to determine a classification of the user input based on the change in shape of heat map 402 over the duration. In some examples, each heat map in heat map 402 represents 8 ms of time, so the entire sequence of heat map 402 may represent 40 ms. In response to receiving heat map 402, SBD model module 226 may determine that a tap event occurred given the consistency of the shape and intensity of the capacitance values. TBD model module 228 may determine that a tap event occurred given the short duration of heat map sequence 402.
Referring next to fig. 4B, heat map 404 is captured after the user presses presence-sensitive display 212. Gesture module 222 may invoke SBD model module 226 and TBD model module 228 to determine a classification of the user input based on the change in shape of heat map 404 over the duration. Again, each heat map in heat map 404 may represent 8 ms of time, so the entire sequence of heat map 404 may represent 40 ms. In response to receiving heat map 404, SBD model module 226 may determine that a press event occurred given the increase in intensity over time. TBD model module 228 may determine that a press event occurred given the longer duration of heat map sequence 404 (which, for ease of illustration, shows only a subset of the larger number of heat maps in the full sequence).
Referring next to fig. 4C, heat map 406 is captured after the user scrolls across presence-sensitive display 212. Gesture module 222 may invoke SBD model module 226 and TBD model module 228 to determine a classification of the user input based on the change in shape of heat map 406 over the duration. Again, each heat map in heat map 406 may represent 8 ms of time, so the entire sequence of heat map 406 may represent 40 ms. In response to receiving heat map 406, SBD model module 226 may determine that a scroll event occurred given that the intensity varies substantially over time (and the location of the centroid may change). TBD model module 228 may determine that a scroll event occurred given the longer duration of heat map sequence 406 (and the change in position of the centroid).
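The tap/press/scroll distinctions drawn by the SBD and TBD model modules could, in principle, be approximated by simple heuristics over intensity and centroid movement. The sketch below is a rough, assumption-laden stand-in (the thresholds, the 8 ms frame interval, and the rule ordering are invented for illustration), not the models described in this disclosure.

```python
import numpy as np

def centroid(frame):
    # Capacitance-weighted centroid of a single heat map, in grid cells.
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = frame.sum() or 1.0
    return float((ys * frame).sum() / total), float((xs * frame).sum() / total)

def classify_sequence(frames, frame_interval_ms=8.0,
                      tap_max_ms=80.0, growth_ratio=1.3, move_threshold=1.0):
    # Illustrative thresholds only; learned SBD/TBD models would replace these rules.
    duration_ms = len(frames) * frame_interval_ms
    intensities = [f.sum() for f in frames]
    c0, c1 = centroid(frames[0]), centroid(frames[-1])
    centroid_shift = np.hypot(c1[0] - c0[0], c1[1] - c0[1])

    if centroid_shift > move_threshold:
        return "scroll"                       # centroid moved appreciably over the duration
    if intensities[-1] > growth_ratio * intensities[0] and duration_ms > tap_max_ms:
        return "press"                        # intensity (contact area) grew over a longer contact
    if duration_ms <= tap_max_ms:
        return "tap"                          # short, shape-consistent contact
    return "unknown"
```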
Fig. 5 is a diagram illustrating example heat maps used by a computing device to determine a classification of user input in accordance with various aspects of the technology described in this disclosure. As shown in the example of fig. 5, heat maps 502A-502C ("heat map 502") represent different characterizations of the same user input entered via presence-sensitive display 212 (of fig. 2). Computing device 210 may invoke gesture module 222 in response to receiving heat map 502. Gesture module 222 may then invoke SBD model module 226 to determine a classification of the user input based on the shape of heat map 502.
As shown with respect to heat map 502A, SBD model module 226 may determine the area of the user input based on heat map 502A. SBD model module 226 may determine the area as the sum of the values of heat map 502A.
As shown with respect to heat map 502B, SBD model module 226 may determine the perimeter of heat map 502B. SBD model module 226 may first perform some form of binary thresholding (to eliminate incidental or negligible capacitance values). SBD model module 226 may then determine the perimeter of heat map 502B as the sum of the remaining exterior values of heat map 502B.
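A minimal sketch of the area and perimeter computations described above might look as follows; the binary threshold value and the 4-connected definition of "exterior" cells are assumptions, since the disclosure does not fix them.

```python
import numpy as np

def contact_area(frame):
    # Area as the sum of the capacitance values in the local heat map.
    return float(frame.sum())

def contact_perimeter(frame, threshold=10.0):
    # Binary-threshold the frame to drop incidental/negligible values, then sum
    # the values of cells on the boundary of the thresholded region (i.e., cells
    # with at least one 4-connected neighbor below the threshold or off the grid).
    mask = frame >= threshold
    rows, cols = frame.shape
    perimeter = 0.0
    for r in range(rows):
        for c in range(cols):
            if not mask[r, c]:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            on_edge = any(
                nr < 0 or nr >= rows or nc < 0 or nc >= cols or not mask[nr, nc]
                for nr, nc in neighbors
            )
            if on_edge:
                perimeter += float(frame[r, c])
    return perimeter
```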
As shown with respect to heat map 502C, SBD model module 226 may determine the orientation of the user input. To determine the orientation, SBD model module 226 may apply a neural network to heat map 502C, which may analyze the capacitance values to identify orientation 504. In the example of fig. 5, based on heat map 502C, the user input has a left-to-right orientation at approximately a 45-degree angle above the X-axis. Based on the orientation, area, and/or perimeter, SBD model module 226 may determine which finger was used to enter the user input or the handedness of the user who entered the user input.
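The disclosure describes a neural network for estimating orientation 504. Purely as a simpler stand-in for illustration, the principal axis of the contact patch can also be estimated from second-order image moments; the sketch below uses that swapped-in technique and should not be read as the disclosed classifier.

```python
import numpy as np

def contact_orientation_degrees(frame):
    # Estimate the principal axis of the contact patch from second-order image
    # moments (effectively PCA of the capacitance distribution). Illustration only.
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    w = frame / (frame.sum() or 1.0)
    mx, my = (w * xs).sum(), (w * ys).sum()
    cxx = (w * (xs - mx) ** 2).sum()
    cyy = (w * (ys - my) ** 2).sum()
    cxy = (w * (xs - mx) * (ys - my)).sum()
    # Angle of the dominant eigenvector of the 2x2 covariance matrix.
    angle = 0.5 * np.arctan2(2.0 * cxy, cxx - cyy)
    return float(np.degrees(angle))
```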
Fig. 6 is a flowchart illustrating example operations of a computing device configured to perform disambiguation of user inputs in accordance with one or more aspects of the present disclosure. Fig. 6 is described below in the context of computing device 210 of fig. 2.
Computing device 210 may receive an indication representing a user input entered at a region of presence-sensitive screen 212 for a duration of time (602). Next, computing device 210 may determine, based on the indication representative of the user input, a plurality of multi-dimensional heat maps (such as heat maps 402, 404, and/or 406 shown in the examples of fig. 4A-4C) indicative of the user input (604). Based on the plurality of multi-dimensional heat maps, computing device 210 may then determine a change in a shape of the plurality of multi-dimensional heat maps over the duration of time (606), and determine a classification of the user input in response to the change in the shape of the plurality of multi-dimensional heat maps (608). Computing device 210 may then perform an operation associated with the classification of the user input, which may include updating the graphical user interface (610).
The techniques set forth in this disclosure may address issues with touch screens (another way of referring to a presence-sensitive display) that report the location of a user's touch based on a centroid algorithm, which estimates an exact touch point (e.g., at a resolution of 1 millimeter (mm)) from the contact area of the user's finger on the screen. However, the user's touch contact area may convey much more information than the centroid algorithm captures, and discarding that information may lead to interaction errors. For example, noise and jitter in the signal values may cause the centroid to move sporadically and appear as rapid movement, rather than a gentle touch-down or lift-off, during the initial stage of finger contact with the screen and the final stage of the finger leaving the screen. This movement is typically translated into a scrolling event, causing unintended interactions (so-called "micro-flipping").
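The conventional centroid computation referred to here can be sketched as a capacitance-weighted mean over the electrode grid; the electrode pitch value below is an assumed figure used only to show how a sub-millimeter location estimate arises, not a value from the disclosure.

```python
import numpy as np

def centroid_touch_point(frame, pitch_mm=4.0):
    # Conventional centroid estimate: the reported touch point is the
    # capacitance-weighted mean of the electrode grid, scaled by the (assumed)
    # electrode pitch to obtain a location in millimeters.
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = frame.sum() or 1.0
    cx = float((xs * frame).sum() / total) * pitch_mm
    cy = float((ys * frame).sum() / total) * pitch_mm
    return cx, cy   # millimeters relative to the top-left electrode
```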
Furthermore, the centroid algorithm may discard potentially useful information about how the user's touch contact evolves over time. The only output of the algorithm is a location, which may shift over time (indicating a user drag); the nature of this movement (in terms of how the finger repositions while in contact with the screen) is lost. This means that existing centroid-based implementations can only use threshold-based algorithms, with simple centroid-movement and contact-time information, to distinguish a user's touch intent, which can increase latency and inaccuracy.
Additional information about the nature of the finger's contact with the screen, and how that contact changes over time, can be used to enhance existing touch interactions and eliminate errors due to ambiguity in the user's touch intent. Touch screens detect interactions using a grid of electrodes that sense the presence of a human finger through changes in capacitance at the electrodes, caused by the human body "leaching" away capacitance. By analyzing the capacitance values at each location on the grid, a "heat map" may be derived in accordance with the techniques described in this disclosure.
That is, the value at each location is an analog signal that roughly corresponds to the concentration of the contact area (i.e., a location near the center of the finger's contact typically has a higher value than the surrounding locations). The value at each location is highly dependent on the electrical characteristics of the user's finger and of the device (e.g., touching the screen while holding the device's aluminum bezel produces values substantially different from touching it while the device rests on a table).
In some examples, these capacitance values have little to no relation to the applied force; that is, pressing harder on the screen does not, by itself, change the measured capacitance. However, when the user presses the screen harder, an organic change occurs: the contact area between the finger and the screen increases due to the plasticity of the skin and the increased force behind it. This enlarged contact area may result in increased heat map values at and near the newly contacted locations. Similar variations may be observed during the stages of finger contact with, and removal from, the touch screen, where the smoothly curved shape of the finger creates an expanding or contracting contact area; that is, when the user touches the screen normally, the tip of the finger contacts the screen first, and the contact area expands as the contact force increases to a comfortable level. When the user taps the screen, the shape of the contact area may also vary with the user's choice of finger and posture.
The techniques described in this disclosure contemplate examining such expanded shapes represented in the heat map and using those shapes to disambiguate user intent. These intents include whether the user is tapping a target, whether the user is attempting to press a target (i.e., at a greater force level, but with a time profile similar to a tap), whether the user is attempting to initiate a scrolling action, the user's choice of finger (e.g., index finger versus thumb), and the user's handedness (i.e., whether the device is held in the left or the right hand).
The faster these intents can be identified from the beginning of the touch action, the better the user experience, because reducing the dependency on time or distance centroid thresholds reduces interaction delays. For example, the above-described interaction in which the user increases touch force may be detected by observing expansion on one side of the original contact area. Due to the biomechanics of the finger, an increase in pressure is reflected primarily by expansion at the base of the fingertip (rather than toward the nail). Thus, if the contact area appears "anchored" on at least one side and expands on the other (e.g., measured as an expansion ratio relative to the original centroid position), the touch intent may be interpreted as an increase in touch force, i.e., a "press".
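As a rough illustration of the "anchored expansion" cue, one could compare the thresholded contact region in an early frame against a later frame and ask whether it grew on one side while staying fixed on the other. The threshold and the one-row tolerances below are assumptions for the sketch, not values from the disclosure.

```python
import numpy as np

def looks_like_press(early_frame, late_frame, threshold=10.0):
    # Illustrative check: the thresholded contact region grows mostly on one side
    # (toward the base of the fingertip) while the opposite side stays roughly fixed.
    early = early_frame >= threshold
    late = late_frame >= threshold
    if early.sum() == 0 or late.sum() <= early.sum():
        return False                                     # no overall expansion

    early_rows, _ = np.nonzero(early)
    late_rows, _ = np.nonzero(late)
    top_growth = early_rows.min() - late_rows.min()      # rows gained above the patch
    bottom_growth = late_rows.max() - early_rows.max()   # rows gained below the patch

    anchored = min(top_growth, bottom_growth) <= 0       # one side stayed put
    expanded = max(top_growth, bottom_growth) >= 1       # the other side grew
    return bool(anchored and expanded)
```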
The handedness of the user may also be detected from distortions in the orientation of the contact area observed in the heat map for a particular finger (e.g., one-handed interaction with the thumb). This additional information can then be used to adjust the centroid calculation to compensate for positioning biases caused by the user's posture.
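Purely as a toy illustration of mapping contact-patch orientation to handedness, one might bucket the estimated orientation angle. The sign convention and cut-off angles below are guesses made for the sketch, not values derived from the disclosure.

```python
def infer_handedness(orientation_degrees):
    # Toy illustration only: a thumb contact from a one-handed grip tends to lean
    # one way for the left hand and the other way for the right hand. The sign
    # convention and the 15-degree cut-offs are assumptions.
    if orientation_degrees > 15.0:
        return "left-hand thumb (suggested)"
    if orientation_degrees < -15.0:
        return "right-hand thumb (suggested)"
    return "indeterminate"
```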
Other interesting features include: the weighted area of the heat map, the perimeter of the heat map (after an edge-finding operation), histograms of the row/column values of the heat map, the peak of the heat map, the location of the peak relative to the edges, correlations of these features with the centroid calculation, or derivatives of these features. Analysis of these features may be performed in a temporal context, i.e., the intent is identified not from a single frame but from signals evolving over a period of time.
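These per-frame features could be collected into a simple feature vector, for example as sketched below; the feature names and the threshold are illustrative assumptions rather than the disclosure's exact feature set.

```python
import numpy as np

def heat_map_features(frame, threshold=10.0):
    # Sketch of the kinds of per-frame features listed above.
    mask = frame >= threshold
    peak_pos = np.unravel_index(int(frame.argmax()), frame.shape)
    rows, cols = frame.shape
    return {
        "weighted_area": float(frame.sum()),
        "row_histogram": frame.sum(axis=1).tolist(),
        "col_histogram": frame.sum(axis=0).tolist(),
        "peak_value": float(frame.max()),
        "peak_distance_to_edge": int(min(peak_pos[0], rows - 1 - peak_pos[0],
                                         peak_pos[1], cols - 1 - peak_pos[1])),
        "active_cells": int(mask.sum()),
    }
```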
These features may be used with heuristic algorithms (as described above) or with machine-learning algorithms that extract the underlying features corresponding to various touch intents. In some examples, the technique does not need to examine the heat map of the entire screen, but only the area proximate to the current location of the touch contact (e.g., a 7x7 grid centered on that location). In these and other examples, the techniques also do not necessarily replace current threshold-based processes; if there is sufficient confidence in the heat map signals, they may act as accelerators for disambiguating intent (leaving the existing approach as a fallback).
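The "accelerator with fallback" idea might be sketched as follows: examine only a local patch around the reported touch location, and trust the heat-map-based label only when its confidence clears a cutoff, otherwise defer to the existing threshold-based result. The patch-extraction helper, the confidence score, and the cutoff value are assumptions made for illustration.

```python
import numpy as np

def local_patch(full_heat_map, touch_row, touch_col, size=7):
    # Only the region around the current contact is examined (e.g., a 7x7 patch
    # centered on the reported touch location), not the whole screen.
    half = size // 2
    r0 = max(0, min(touch_row - half, full_heat_map.shape[0] - size))
    c0 = max(0, min(touch_col - half, full_heat_map.shape[1] - size))
    return full_heat_map[r0:r0 + size, c0:c0 + size]

def classify_with_fallback(heat_map_label, heat_map_confidence,
                           threshold_based_label, confidence_cutoff=0.9):
    # Trust the heat-map classifier only when its confidence is high; otherwise
    # fall back to the existing threshold-based result. Cutoff is an assumed value.
    if heat_map_confidence >= confidence_cutoff:
        return heat_map_label
    return threshold_based_label
```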
The following numbered examples may illustrate one or more aspects of the present disclosure:
example 1: a method, the method comprising: receiving, by one or more processors of a computing device, an indication representing a user input entered at an area of a presence-sensitive screen for a duration of time; determining, by the one or more processors and based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input; determining, by the one or more processors and based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over a duration of time; determining, by the one or more processors and in response to a change in the shape of the plurality of multi-dimensional heat maps, a classification of the user input; and performing, by one or more processors, an operation associated with the classification of the user input.
Example 2: the method of example 1, wherein the indication comprises an indication of capacitance in the area of the presence-sensitive screen over the duration of time, and wherein determining the plurality of multi-dimensional heat maps comprises: determining, based on the capacitance indications, a plurality of capacitance heat maps representing capacitances in the area of the presence-sensitive screen, the plurality of capacitance heat maps reflecting the user input entered at the area of the presence-sensitive screen over the duration of time.
Example 3: the method of any combination of examples 1 and 2, wherein determining the plurality of multi-dimensional heat maps comprises: reducing the number of indications to obtain a reduced set of indications; and determining, based on the reduced set of indications, a plurality of multi-dimensional heat maps indicative of the user input.
Example 4: the method of any combination of examples 1-3, wherein determining the classification of the user input comprises: in response to a change in the shape of the plurality of multi-dimensional heat maps, a press event is determined that indicates that a user inputting the user input applied increased pressure to the presence-sensitive screen for a duration of time.
Example 5: the method of any combination of examples 1-4, wherein determining the classification of the user input comprises: in response to a change in a shape of the plurality of multi-dimensional heat maps and based on the duration threshold, a tap event is determined that indicates that a user that inputs the user input performed at least one tap on the presence-sensitive screen.
Example 6: the method of any combination of examples 1 to 5, further comprising: determining a handedness of a user inputting the user input in response to a change in shape of the plurality of multi-dimensional heat maps, wherein determining the classification of the user input comprises: a classification of the user input is determined based on a determination of a handedness of the user inputting the user input and in response to a change in a shape of the heat map.
Example 7: the method of any combination of examples 1 to 6, further comprising: determining, in response to a change in the shape of the plurality of multi-dimensional heat maps, which finger of the user that entered the user input was used to enter the user input, wherein determining the classification of the user input comprises: determining the classification of the user input based on the determination of which finger of the user that entered the user input was used to enter the user input and in response to the change in the shape of the heat map.
Example 8: the method of any combination of examples 1-7, wherein determining the classification of the user input comprises: a classification of the user input is determined in response to a change in a shape of the plurality of multi-dimensional heat maps and based on a duration threshold.
Example 9: the method of any combination of examples 1 to 8, further comprising: in response to determining the plurality of multi-dimensional heat maps indicative of the user input, determining one or more centroid coordinates indicative of a relative center of one or more of the plurality of multi-dimensional heat maps within the area of the presence-sensitive screen; and determining one or more base graphical elements displayed at locations within the presence-sensitive display identified by the one or more centroid coordinates, wherein performing operations associated with the classification of the user input comprises: an operation associated with the classification of the user input is performed with respect to the one or more base graphical elements.
Example 10: a computing device, the computing device comprising: a presence-sensitive screen configured to output an indication representative of a user input entered at a region of the presence-sensitive screen for a duration of time; and one or more processors configured to: based on the indication representing the user input, determining a plurality of multi-dimensional heat maps indicative of the user input; determining, based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over a duration of time; determining a classification of the user input in response to a change in shape of the plurality of multi-dimensional heat maps; and performing an operation associated with the classification of the user input.
Example 11: the device of example 10, wherein the indication comprises an indication of capacitance in the area of the presence-sensitive screen over the duration of time, and wherein the one or more processors are configured to: determine, based on the capacitance indications, a plurality of capacitance heat maps representing capacitances in the area of the presence-sensitive screen, the plurality of capacitance heat maps reflecting the user input entered at the area of the presence-sensitive screen over the duration of time.
Example 12: the apparatus of any combination of examples 10 and 11, wherein the one or more processors are configured to: reducing the number of indications to obtain a reduced set of indications; and determining, based on the reduced set of indications, a plurality of multi-dimensional heat maps indicative of the user input.
Example 13: the apparatus of any combination of examples 10 to 12, wherein the one or more processors are configured to: in response to a change in the shape of the plurality of multi-dimensional heat maps, a press event is determined that indicates that a user inputting the user input applied increased pressure to the presence-sensitive screen for a duration of time.
Example 14: the apparatus of any combination of examples 10 to 13, wherein the one or more processors are configured to: in response to a change in a shape of the plurality of multi-dimensional heat maps and based on the duration threshold, a tap event is determined that indicates that a user that inputs the user input performed at least one tap on the presence-sensitive screen.
Example 15: the apparatus of any combination of examples 10 to 14, wherein the one or more processors are further configured to: in response to a change in the shape of the plurality of multi-dimensional heat maps, determining a handedness of a user inputting the user input, and wherein the one or more processors are configured to: a classification of the user input is determined based on a determination of a handedness of the user inputting the user input and in response to a change in a shape of the heat map.
Example 16: the apparatus of any combination of examples 10 to 15, wherein the one or more processors are further configured to: determine, in response to a change in the shape of the plurality of multi-dimensional heat maps, which finger of the user that entered the user input was used to enter the user input, and wherein the one or more processors are configured to: determine the classification of the user input based on the determination of which finger of the user that entered the user input was used to enter the user input and in response to the change in the shape of the heat map.
Example 17: the apparatus of any combination of examples 10 to 16, wherein the one or more processors are configured to determine the classification of the user input based on a duration threshold in response to a change in a shape of the plurality of multi-dimensional heat maps.
Example 18: the apparatus of any combination of examples 10 to 17, wherein the one or more processors are further configured to: in response to determining the plurality of multi-dimensional heat maps indicative of the user input, determining one or more centroid coordinates indicative of a relative center of one or more of the plurality of multi-dimensional heat maps within the area of the presence-sensitive screen; and determining one or more base graphical elements displayed at a location within the presence-sensitive display identified by the one or more centroid coordinates, and wherein the one or more processors are configured to: an operation associated with the classification of the user input is performed with respect to the one or more base graphical elements.
Example 19: a system comprising means for performing any of the methods of examples 1-9.
Example 20: a computing device comprising means for performing any of the methods of examples 1-9.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as a data storage medium, or communication media, including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media may generally correspond to (1) a non-transitory, tangible computer-readable storage medium or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. However, it should be understood that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead refer to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used may refer to any of the foregoing structure or any other structure suitable for implementing the described techniques. Further, in some aspects, the described functionality may be provided within dedicated hardware modules and/or software modules. Furthermore, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperating hardware units (including one or more of the processors described above), in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.

Claims (15)

1. A method, comprising:
receiving, by one or more processors of a computing device, an indication representing a user input entered at an area of a presence-sensitive screen for a duration of time;
determining, by the one or more processors and based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input;
determining, by the one or more processors and based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over the duration of time;
determining, by the one or more processors and in response to a change in shape of the plurality of multi-dimensional heat maps, a classification of the user input; and
performing, by the one or more processors, an operation associated with the classification of the user input.
2. The method of claim 1,
wherein the indication comprises a capacitance indication in an area of the presence-sensitive screen over the duration, and
wherein determining the plurality of multi-dimensional heat maps comprises: based on the capacitance indication, determining a plurality of capacitance heat maps representing capacitances in areas of the presence-sensitive screen, the plurality of capacitance heat maps reflecting the user input entered at the areas of the presence-sensitive screen over the duration of time.
3. The method of any combination of claims 1 and 2, wherein determining the plurality of multi-dimensional heat maps comprises:
reducing the number of indications to obtain a reduced set of indications; and
determining, based on the reduced set of indications, the plurality of multi-dimensional heat maps indicative of the user input.
4. The method of any combination of claims 1-3, wherein determining the classification of the user input comprises: in response to a change in shape of the plurality of multi-dimensional heat maps, determining a press event indicating that a user inputting the user input applied increased pressure to the presence-sensitive screen for the duration of time.
5. The method of any combination of claims 1-4, wherein determining the classification of the user input comprises: determining, in response to a change in a shape of the plurality of multi-dimensional heat maps and based on a duration threshold, a tap event indicating that a user inputting the user input performed at least one tap on the presence-sensitive screen.
6. The method of any combination of claims 1-5, further comprising: determining a handedness of a user inputting the user input in response to a change in shape of the plurality of multi-dimensional heat maps,
wherein determining the classification of the user input comprises: determining a classification of the user input based on a determination of a handedness of a user inputting the user input and in response to a change in shape of the heat map.
7. The method of any combination of claims 1-6, further comprising: determining which finger of a user inputting the user input is used to input the user input in response to a change in shape of the plurality of multi-dimensional heat maps,
wherein determining the classification of the user input comprises: determining a classification of the user input based on a determination of which finger of a user that entered the user input was used to enter the user input and in response to a change in shape of the heat map.
8. The method of any combination of claims 1-7, wherein determining the classification of the user input comprises: determining a classification of the user input in response to a change in shape of the plurality of multi-dimensional heat maps and based on a duration threshold.
9. The method of any combination of claims 1-8, further comprising:
in response to determining the plurality of multi-dimensional heat maps indicative of the user input, determining one or more centroid coordinates indicative of a relative center of one or more of the plurality of multi-dimensional heat maps within an area of the presence-sensitive screen; and
determining one or more base graphical elements displayed at locations within the presence-sensitive display identified by the one or more centroid coordinates,
wherein performing an operation associated with the classification of the user input comprises: performing an operation associated with the classification of the user input with respect to the one or more base graphical elements.
10. A computing device, comprising:
a presence-sensitive screen configured to output an indication representative of a user input entered at a region of the presence-sensitive screen for a duration of time; and
one or more processors configured to:
determining, based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input;
determining, based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over the duration of time;
determining a classification of the user input in response to a change in shape of the plurality of multi-dimensional heat maps; and
performing an operation associated with the classification of the user input.
11. The computing device of claim 10,
wherein the indication comprises a capacitance indication in an area of the presence-sensitive screen over the duration, and
wherein the one or more processors are configured to: based on the capacitance indication, determining a plurality of capacitance heat maps representing capacitances in areas of the presence-sensitive screen, the plurality of capacitance heat maps reflecting the user input entered at the areas of the presence-sensitive screen over the duration of time.
12. The device of any combination of claims 10 and 11, wherein the one or more processors are configured to:
reducing the number of indications to obtain a reduced set of indications; and
determining, based on the reduced set of indications, the plurality of multi-dimensional heat maps indicative of the user input.
13. The device of claim 10, wherein the one or more processors are configured to perform any combination of the steps recited by the methods of claims 2-9.
14. A computer-readable medium having instructions stored thereon that, when executed, cause one or more processors to:
receiving an indication representative of a user input entered at an area of a presence-sensitive screen for a duration of time;
determining, based on the indication representative of the user input, a plurality of multi-dimensional heat maps indicative of the user input;
determining, based on the plurality of multi-dimensional heat maps, a change in a shape of the plurality of multi-dimensional heat maps over the duration of time;
determining a classification of the user input in response to a change in shape of the plurality of multi-dimensional heat maps; and
performing an operation associated with the classification of the user input.
15. The computer-readable medium of claim 14, further having instructions stored thereon that, when executed, cause the one or more processors to perform any combination of the steps recited by the method of claims 2-9.