HK1186799B - Detecting gestures involving intentional movement of a computing device - Google Patents
- Publication number: HK1186799B
- Authority: HK (Hong Kong)
Description
Background
Handheld computing devices often allow a user to input information by making direct contact with a display surface of the device. These types of input mechanisms are referred to herein as contact-type input mechanisms. For example, a touch input mechanism provides direct touch input events when a user touches the display surface of a computing device with a finger (or fingers). A pen input mechanism provides direct pen input events when a user touches the display surface with a pen device, also referred to as a stylus.
The computing device may also permit the user to perform gestures using one or more fingers or a pen device. For example, a gesture may correspond to a telltale mark that a user traces on the display surface with a finger or pen device. The computing device correlates the gesture with an associated command and then executes the command. This execution may occur during the user's input action (as in a direct-manipulation drag action) or after the user completes the input action.
In general, developers may wish to provide expressive contact-type input mechanisms that accommodate a rich set of input gestures. However, increasing the number of gestures introduces a number of challenges. For example, assume that a computing device accommodates two or more similar predefined intentional gestures. The user may intend to enter a particular gesture, but the computing device may incorrectly interpret it as another, similar gesture. In another case, a user may use the computing device to perform a task that does not involve intentional interaction with a contact-type input mechanism. The user may handle the computing device in a manner that results in inadvertent contact with the contact-type input mechanism. Or the user may accidentally brush or touch the display surface while entering information, resulting in accidental contact with the contact-type input mechanism. The contact-type input mechanisms may incorrectly interpret these accidental contacts as legitimate input events. These problems can frustrate users when they occur frequently, or when they cause significant interruptions in the task the user is performing, even if they occur only rarely.
Developers can address some of these issues by devising complex idiosyncratic gestures. However, this is not a fully satisfactory solution, because users may have difficulty remembering and performing such gestures, especially when they are complex and "unnatural". Furthermore, complex gestures often require the user to spend a longer time articulating them. For these reasons, adding such gestures to the set of possible gestures yields diminishing returns.
SUMMARY
A computing device is described herein that receives one or more contact input events from one or more contact-type input mechanisms, such as a touch input mechanism and/or a pen input mechanism. The computing device also receives one or more movement input events from one or more movement-type input mechanisms, such as an accelerometer and/or a gyroscope device. These movement-type input mechanisms indicate the orientation or dynamic motion (or both) of the computing device during operation of the contact-type input mechanisms. (More specifically, as used herein, the term movement broadly encompasses the orientation of the computing device, the motion of the computing device, or both.) Based on these input events, the computing device is able to recognize gestures that incorporate movement of the computing device as an intentional and integral part of the gesture.
The detailed description sets forth numerous examples of gestures that involve intentional movement. In some cases, a user may apply a contact to an object presented on the display surface while also applying some type of prescribed movement. The computing device may then apply a prescribed behavior to the designated object. In other cases, the user may interleave contacts and movements in any manner. In many cases, the behavior results in a visual manipulation of content presented by the computing device. Some of the examples described in the detailed description are summarized below.
According to one example, a user may rotate a computing device from an initial starting point to a prescribed orientation. The degree of rotation describes the manner in which the behavior is performed. In one case, the behavior may include a zoom action or a scroll action.
According to another example, a user may flip the computing device up and/or down and/or in any other direction while maintaining contact on an object presented on the display surface. The computing device may interpret the gesture as a command to flip or otherwise move an object, such as one or more pages, on the display surface.
According to another example, the user may establish contact with the display surface at a point of contact and then rotate the computing device approximately 90 degrees or some other amount. The computing device may interpret the gesture as a command to set or release a lock that prevents rotation of information presented on the display surface.
According to another example, a user may point a computing device at a target entity while selecting an object presented on a display surface. The computing device may interpret the gesture as a request to perform some action on the object relative to the target entity.
Still other gestures are possible in conjunction with intentional movement of a computing device. These gestures are referred to as foreground gestures because the user performs them intentionally and consciously, with a particular objective in mind. This type of movement contrasts with background movement. Background movement refers to movement that is incidental to the targeted goal of a user who is performing a gesture or providing any other type of input. The user may not even be aware of the background movement.
The above functionality may be manifested in various types of systems, components, methods, computer-readable media, data structures, articles of manufacture, and so on.
This summary is provided to introduce a selection of concepts in a simplified form; these concepts will be further described in the following detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Brief Description of Drawings
FIG. 1 shows an illustrative computing device that includes functionality for interpreting contact input events in the context of movement input events and/or for interpreting movement input events in the context of contact input events.
FIG. 2 illustrates an Interpretation and Behavior Selection Module (IBSM) used in the computing device of FIG. 1.
FIG. 3 shows an illustrative system in which the computing device of FIG. 1 may be used.
FIGS. 4 and 5 show examples of rotational movements that may be used in intentional gestures.
FIGS. 6-9 illustrate different gestures that may incorporate the type of rotational movement illustrated in FIG. 4 or FIG. 5.
FIG. 10 illustrates an intentional gesture involving pointing a computing device at a target entity.
FIG. 11 illustrates an intentional gesture involving moving a computing device against a fixed finger.
FIG. 12 illustrates an intentional gesture involving the application of a touch to two display portions of a computing device having at least two display portions.
FIG. 13 illustrates intentional gestures enabled by application of idiosyncratic multi-touch gestures.
FIG. 14 illustrates an intentional gesture that involves applying a contact point to a computing device and then rotating the computing device.
FIG. 15 illustrates functionality for using movement input events to enhance the interpretation of contact input events (e.g., when a finger or other hand part is applied to or removed from a display surface of a computing device).
FIG. 16 illustrates background movement that typically results in a large movement input event.
FIGS. 17 and 18 illustrate environments in which a computing device may disregard touch input events based on an orientation of the computing device.
FIG. 19 shows a flow chart explaining one manner of operation of the computing device of FIG. 1 in a foreground mode of operation.
FIG. 20 shows a flow chart illustrating one manner of operation of the computing device of FIG. 1 in a background mode of operation.
FIG. 21 shows illustrative processing functionality that can be used to implement any aspect of the features shown in the above-described figures.
The same reference numerals are used throughout the disclosure and figures to reference like components and features. Series 100 numbers refer to features originally found in FIG. 1, series 200 numbers refer to features originally found in FIG. 2, series 300 numbers refer to features originally found in FIG. 3, and so on.
Detailed Description
This disclosure is organized as follows. Section A describes an illustrative computing device that accommodates gestures involving intentional movement of the computing device. Section B describes illustrative methods that explain the operation of the computing device of Section A. Section C describes illustrative processing functionality that can be used to implement any aspect of the features described in Sections A and B.
This application is related to the commonly assigned patent application entitled "Using Movement of a Computing Device to Enhance Interpretation of Input Events Produced While Interacting with the Computing Device," filed on even date herewith by Kenneth Hinckley et al. (attorney docket number 330015.01). That application is incorporated herein by reference in its entirety.
As a preliminary matter, some of the figures describe concepts in the context of one or more structural components (variously referred to as functionality, modules, features, elements, etc.). The components shown in the figures can be implemented in any manner by any physical and tangible mechanism, such as by hardware, software, firmware, etc., or any combination thereof. In one case, the illustrated separation of various components in the figures into distinct units may reflect the use of corresponding distinct components in an actual implementation. Alternatively or additionally, any single component shown in the figures may be implemented by a plurality of actual components. Alternatively or additionally, the depiction of any two or more separate components in the figures may reflect different functions performed by a single actual component. FIG. 21, discussed below, provides additional details regarding one illustrative implementation of the functionality shown in the figures.
Other figures depict the concepts in flow chart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are illustrative and not restrictive. Some blocks described herein may be grouped together and performed in a single operation, some blocks may be separated into component blocks, and some blocks may be performed in a different order than shown herein (including performing the blocks in a parallel manner). The various blocks shown in the various flowcharts can be implemented in any manner by any physical and tangible mechanism (such as by hardware, software, firmware, etc., or any combination thereof).
With respect to terminology, the phrase "configured to" encompasses any manner in which any kind of physical and tangible function may be constructed to perform an identified operation. The functions may be configured to perform operations using, for example, software, hardware, firmware, etc., and/or any combination thereof.
The term "logic" encompasses any physical and tangible function for performing a task. For example, each operation illustrated in the flowcharts corresponds to a logical component for performing the operation. The operations may be performed using, for example, software, hardware, firmware, etc., and/or any combination thereof. When implemented by a computing system, the logical components represent electronic components that are physical parts of the computing system, regardless of how they are implemented.
The following description may identify one or more features as "optional". This type of statement should not be construed as an exhaustive indication of features that may be considered optional; that is, other features may also be considered optional, although not explicitly identified in the text. Similarly, the interpretation may identify a single instance of a feature or multiple instances of a feature. The reference to a single instance of a feature does not exclude a plurality of instances of this feature; further, the use of multiple instances does not preclude a single instance of this feature. Finally, the terms "exemplary" or "illustrative" refer to one implementation out of a possible plurality of implementations.
A. Illustrative computing device
A.1. Overview
FIG. 1 illustrates an example of a computing device 100 that accounts for the movement of the computing device 100 when analyzing contact input events. The computing device 100 optionally includes a display mechanism 102 in combination with various input mechanisms 104. The display mechanism 102 (when included) provides a visual rendering of digital information on its display surface. The display mechanism 102 may be implemented as any type of display, such as a liquid crystal display or the like. Although not shown, the computing device 100 may also include an audio output mechanism, a haptic (e.g., vibratory) output mechanism, and so forth.
The input mechanisms 104 may include a touch input mechanism 106 and a pen input mechanism 108. The touch input mechanism 106 may be implemented using any technology, such as resistive touch screen technology, capacitive touch screen technology, acoustic touch screen technology, bi-directional touch screen technology, and so forth. In bi-directional touch screen technology, the display mechanism provides elements dedicated to displaying information together with elements dedicated to receiving information; the surface of a bi-directional display mechanism therefore also serves as a capture mechanism. The touch input mechanism 106 and the pen input mechanism 108 may also be implemented using a pad-type input mechanism that is separate (or at least partially separate) from the display mechanism 102. A pad-type input mechanism is also referred to as a tablet, a digitizer, a graphics pad, and the like.
The pen input mechanism 108 may be implemented using any technology, such as passive pen technology, active pen technology, and the like. In the passive case, the computing device 100 detects the presence of the pen device when it is in contact with (or in proximity to) the display surface. In this case, the pen device may simply be an elongated implement with no independent power source or processing functionality, or it may be passively powered by inductive coupling with the display mechanism 102. In the active case, the pen device may incorporate independent detection functionality for sensing its position relative to the display surface. Further, the active pen device may include an independent movement sensing mechanism and/or an independent depth sensing mechanism. In these examples, the active pen device may forward its input data to the computing device 100 for analysis. In the following description, it is understood that input data pertaining to the pen device may originate from the computing device 100, from the pen device itself, or from a combination of the two.
In the terminology used herein, a contact-type input mechanism describes any type of input mechanism in which the user establishes actual or near contact with a display surface of the display mechanism 102 or another portion of the computing device 100. Contact-type input mechanisms include the touch input mechanism 106 and the pen input mechanism 108 described above, among others. Further, any contact with the computing device 100 may comprise one or more individual contacts. For example, a user may make contact with the display surface by bringing one or more fingers into proximity with, or actual contact with, the display surface.
The input mechanisms 104 also include various types of movement-type input mechanisms 110. The term movement-type input mechanism describes any type of input mechanism that measures the orientation or motion, or both, of the computing device 100. The movement-type input mechanisms 110 may be implemented using linear accelerometers, gyroscopic sensors ("gyroscope devices" per the terminology used herein), vibration sensors tuned to various frequency bands of motion, mechanisms for detecting particular gestures or movements of the computing device 100 (or portions thereof) with respect to gravity, torque sensors, strain gauges, flex sensors, optical encoder mechanisms, and the like. Further, any movement-type input mechanism may sense movement along any number of spatial axes. For example, the computing device 100 may incorporate an accelerometer and/or a gyroscope device that measures movement along three spatial axes.
The input mechanisms 104 may also include any type of image sensing input mechanism 112, such as a video capture input mechanism, a depth sensing input mechanism, a stereo image capture mechanism, and so forth. A depth sensing input mechanism measures the distance of objects from some portion of the computing device 100. For example, in one case, the depth sensing input mechanism measures the distance of objects from the display surface of the computing device 100. The depth sensing input mechanism may be implemented using any type of capture technology (e.g., time-of-flight technology) in combination with any type of electromagnetic radiation (e.g., visible-spectrum radiation, infrared-spectrum radiation, etc.). Some of the image sensing input mechanisms 112 may also function as movement-type input mechanisms, insofar as they can be used to determine the movement of the computing device 100 relative to the surrounding environment. Alternatively or additionally, some of the image sensing input mechanisms 112 may be used in conjunction with the movement-type input mechanisms 110, for example, to enhance the performance and/or accuracy of the movement-type input mechanisms 110. Although not specifically enumerated in FIG. 1, other input mechanisms may include a keypad input mechanism, a mouse input mechanism, a voice input mechanism, and the like.
In the terminology used herein, each input mechanism is considered to generate an input event when the input mechanism is invoked. For example, when a user touches a display surface of the display mechanism 102 (or other portion of the computing device 100), the touch input mechanism 106 generates a touch input event. The pen input mechanism 108 generates pen input events when a user applies a pen device to a display surface. More generally, a contact-type input mechanism generates a contact input event that represents the close or actual physical contact of an object with the computing device 100. Conversely, a movement-type input mechanism is considered to generate a movement input event. Any input event may itself comprise one or more input event components. For ease of reference, the following explanation will often describe the output of an input mechanism in plural numbers, such as "multiple input events". However, various analyses may also be performed on the basis of a single input event.
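By way of a concrete (and purely illustrative) sketch, the contact input events and movement input events described above might be represented as simple data records like the following; the class and field names are assumptions introduced here for explanation and are not terms from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ContactInputEvent:
    """One contact input event from a contact-type input mechanism."""
    source: str                        # e.g., "touch" or "pen"
    position: Tuple[float, float]      # (x, y) location on the display surface
    timestamp: float                   # time of the event, in seconds
    is_down: bool = True               # True when contact is applied, False when removed


@dataclass
class MovementInputEvent:
    """One movement input event from a movement-type input mechanism."""
    accel: Tuple[float, float, float]  # linear acceleration along x, y, z
    gyro: Tuple[float, float, float]   # angular velocity about x, y, z
    timestamp: float


@dataclass
class InputAction:
    """A window of events collected while the user performs an input action."""
    contact_events: List[ContactInputEvent] = field(default_factory=list)
    movement_events: List[MovementInputEvent] = field(default_factory=list)
```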
Fig. 1 shows the input mechanism 104 as partially overlapping the display mechanism 102. This is because at least a portion of the input mechanism 104 may be integrated with the functionality associated with the display mechanism 102. This is the case with respect to contact-type input mechanisms, such as touch input mechanism 106 and pen input mechanism 108. For example, the touch input mechanism 106 depends in part on the functionality provided by the display mechanism 102.
An Interpretation and Behavior Selection Module (IBSM) 114 receives input events from the input mechanisms 104. That is, it collects input events over the course of input actions, where the input actions may be intentional or unintentional, or a combination of both. As its name suggests, the IBSM 114 performs the task of interpreting the input events, for example by interpreting contact input events in the context of movement input events (and/or vice versa), optionally in conjunction with other factors. After performing its interpretation role, the IBSM 114 can perform zero, one, or more behaviors associated with the interpreted input events.
From a high-level perspective, the IBSM 114 performs two tasks, depending on the nature of the movement that occurs during an input action. In the first task, the IBSM 114 analyzes both contact input events and movement input events to determine whether the user has intentionally moved the computing device 100 as part of a deliberate gesture. If so, the IBSM 114 recognizes the gesture and performs whatever behavior is mapped to that gesture. In this context, the movement that occurs is a foreground movement, since the user performs the action intentionally; that is, it is in the foreground of the user's focus of awareness.
More specifically, in a first case, the IBSM 114 performs the behavior upon completion of the gesture. In a second case, the IBSM 114 performs the behavior over the course of the gesture. In either case, the IBSM 114 can also interpret two or more gestures that occur simultaneously and/or in succession, and perform the corresponding behaviors associated with those gestures. Further, the IBSM 114 can continue to analyze the type of gesture that has occurred (or is currently occurring) and revise its interpretation of the gesture as appropriate. Examples that illustrate these aspects of the IBSM 114 are set forth below.
In a second task, the IBSM114 analyzes contact input events along with movement input events, wherein the movement input events are categorized as background movements of the computing device 100. This movement is background, meaning that it is in the background of user awareness. The movement may be incidental to any goal-directed behavior of the user (if any). In this case, the IBSM114 attempts to interpret the type of input actions associated with these input events, for example, to determine whether contact with the display surface was intentional or unintentional. The IBSM114 can then perform various actions in response to its analysis.
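As a minimal sketch only, the two tasks just described might be organized along the following lines, building on the event records sketched earlier; the recognizer interface, the accidental-contact heuristic, and all thresholds are illustrative assumptions rather than details taken from the disclosure.

```python
from typing import Callable, Dict, List, Optional, Tuple

# A recognizer inspects an InputAction and returns True when its gesture matches.
GestureRecognizer = Callable[["InputAction"], bool]
Behavior = Callable[["InputAction"], None]


class IBSMSketch:
    """Toy illustration of the IBSM's foreground and background tasks."""

    def __init__(self,
                 recognizers: List[Tuple[str, GestureRecognizer]],
                 behaviors: Dict[str, Behavior]):
        self.recognizers = recognizers
        self.behaviors = behaviors

    def handle(self, action: "InputAction") -> Optional[str]:
        # Task 1 (foreground): look for an intentional, movement-based gesture
        # and execute the behavior mapped to it.
        for name, matches in self.recognizers:
            if matches(action):
                self.behaviors[name](action)
                return name

        # Task 2 (background): the movement is incidental; use it only to judge
        # whether the contact itself was intentional, and ignore it if not.
        if self._looks_accidental(action):
            action.contact_events.clear()
        return None

    @staticmethod
    def _looks_accidental(action: "InputAction") -> bool:
        # Placeholder heuristic: very large device motion accompanying a contact
        # (e.g., the device being picked up) suggests the contact was incidental.
        peak = max((abs(a) for e in action.movement_events for a in e.accel),
                   default=0.0)
        return peak > 15.0  # illustrative threshold only
```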
Section A.2 (below) provides additional details regarding the foreground mode of operation, while Section A.3 (below) provides additional details regarding the background mode of operation. Although many of the examples illustrate the use of touch input in conjunction with movement input events, any of these examples also apply to the use of pen input (or any other contact input) in conjunction with movement input events.
Finally, the computing device 100 may run one or more applications 116 received from any one or more application sources. The applications 116 may provide any high-level functionality in any application domain.
In one case, the IBSM 114 represents a component that is separate from the applications 116. In another case, one or more functions attributed to the IBSM 114 may be performed by one or more of the applications 116. For example, in one implementation, the IBSM 114 may interpret a gesture that has been performed, while an application selects and performs a behavior based on that interpretation. Accordingly, the concept of the IBSM 114 is to be interpreted liberally herein as encompassing functions that may be performed by any number of components within a particular implementation.
FIG. 2 shows another depiction of the IBSM 114 introduced in FIG. 1. As shown there, the IBSM 114 receives various input events, such as touch input events, pen input events, orientation input events, motion input events, image sensing input events, and the like. In response to these events, the IBSM 114 provides various output behaviors. For example, the IBSM 114 can execute various commands in response to detecting an intentional movement-based gesture. Depending on environment-specific considerations, the IBSM 114 can perform these functions during and/or after the user completes a gesture. The IBSM 114 can also provide various visual, audible, tactile, or other feedback indicators to convey its current interpretation of a gesture that has been (or is being) performed. Alternatively or additionally, if the IBSM 114 determines that an input action is unintentional, the IBSM 114 can ignore some portion of the contact input events. Alternatively or additionally, the IBSM 114 can restore a state (such as a display state and/or an application state) to a point before the input action occurred. Alternatively or additionally, when the IBSM 114 determines that at least part of an input action is unintentional, the IBSM 114 can correct its interpretation of the contact input events to remove the effects of the movement input events. Alternatively or additionally, the IBSM 114 can adjust the configuration of any of the input mechanisms based on its understanding of the input action, e.g., so as to more effectively receive further intentional contact input events and minimize the effects of unintentional contacts. Alternatively or additionally, the IBSM 114 can modify or remove the feedback indicators described above based on its current interpretation of the input events, e.g., insofar as the current interpretation may differ from a previous interpretation.
To function as described above, the IBSM 114 can incorporate a suite of analysis modules, where detection of different gestures and background motion scenarios may rely on different respective analysis modules. Any analysis module may rely on one or more techniques for classifying input events, including pattern matching techniques, rule-based techniques, statistical techniques, and so on. For example, each gesture or background scenario may be characterized by a particular telltale pattern of input events. To classify a particular sequence of input events, an analysis module may compare those input events against a data store of known patterns. Representative features that may help distinguish among gestures or background scenarios include: the manner in which a contact is applied and then removed, the manner in which the contact is moved (if at all) while applied, the magnitude of the movement input events, the particular signal shapes and other characteristics of the movement input events, and so on. And, as noted above, the analysis modules may continually test their conclusions against new input events as they arrive.
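For instance, one simple pattern-matching approach, sketched below with invented feature names, thresholds, and templates, would reduce an input action to a small feature vector and compare it against a stored template for each known gesture or background scenario; this is offered only as an illustration of the general idea, not as the disclosed implementation.

```python
import math
from typing import Dict


def extract_features(action: "InputAction") -> Dict[str, float]:
    """Reduce an input action to a few illustrative, comparable features."""
    accels = [math.sqrt(sum(a * a for a in e.accel)) for e in action.movement_events]
    return {
        "num_contacts": float(len(action.contact_events)),
        "peak_accel": max(accels, default=0.0),
        "mean_accel": sum(accels) / len(accels) if accels else 0.0,
    }


def classify(action: "InputAction",
             templates: Dict[str, Dict[str, float]]) -> str:
    """Return the name of the stored template closest to the observed features."""
    observed = extract_features(action)
    best_name, best_distance = "unrecognized", float("inf")
    for name, template in templates.items():
        distance = sum((observed[k] - template.get(k, 0.0)) ** 2 for k in observed)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name


# Example templates: a forceful shake-while-touching gesture versus the small
# incidental motion produced by simply tapping the display.
TEMPLATES = {
    "shake_with_touch": {"num_contacts": 1.0, "peak_accel": 20.0, "mean_accel": 8.0},
    "incidental_tap_motion": {"num_contacts": 1.0, "peak_accel": 2.0, "mean_accel": 0.5},
}
```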
FIG. 3 shows an illustrative system 300 in which the computing device 100 of FIG. 1 may be used. In this system 300, a user interacts with the computing device 100 to provide input events and receive output information. Computing device 100 may be physically implemented as any type of device, including any type of handheld device and any type of conventional stationary device. For example, the computing device 100 may be implemented as a personal digital assistant, a mobile communication device, a tablet device, an electronic book reading device, a handheld gaming device, a laptop computing device, a personal computing device, a workstation device, a game console device, a set-top box device, and so forth. Further, computing device 100 may include one or more device portions, some (or all or none) of which may have a display surface portion.
FIG. 3 shows a representative (but non-limiting) set of implementations of the computing device 100. In scenario A, the computing device 100 is a handheld device of any size. In scenario B, the computing device 100 is an electronic book reader device having multiple device portions. In scenario C, the computing device 100 includes a pad-type input device, e.g., whereby a user makes touch and/or pen gestures on the surface of the pad-type input device instead of (or in addition to) the display surface of the display mechanism 102. The pad-type input device may be integrated with the display mechanism 102 or separate from it (or some combination thereof). In scenario D, the computing device 100 is a laptop computer of any size. In scenario E, the computing device 100 is any type of personal computer. In scenario F, the computing device 100 is associated with a wall-type display mechanism. In scenario G, the computing device 100 is associated with a desktop display mechanism, or the like.
In one scenario, computing device 100 may act in a local mode without interacting with any other functionality. Alternatively or additionally, the computing device 100 may interact with any type of remote computing functionality 302 via any type of network 304 (or multiple networks). For example, the remote computing functionality 302 may provide applications that are executable by the computing device 100. In one case, computing device 100 may download an application; in another case, the computing device 100 may utilize an application via a web interface or the like. The remote computing functionality 302 may also implement any one or more aspects of the IBSM 114. Accordingly, in any implementation, one or more functions identified as components of computing device 100 may be implemented by remote computing functionality 302. The remote computing functionality 302 can be physically implemented using one or more server computers, data stores, routing devices, and the like. The network 304 may be implemented by any type of local area network, wide area network (e.g., the internet), or a combination thereof. Network 304 may be physically implemented with any combination of wireless links, hardwired links, name servers, gateways, etc., governed by any protocol or combination of protocols.
A.2. Foreground-Related Movement
This section describes the operation of the IBSM114 for situations where a user intentionally moves or strikes a computing device as part of an intentional gesture. In general, gestures incorporating any type of movement may be defined, where the term movement is given a broad interpretation as described herein. In some cases, the gesture involves moving the computing device 100 to a specified orientation relative to the initial orientation. Alternatively or additionally, the gesture may involve applying a specified motion (e.g., a motion along a path, a vibratory motion, etc.) to the computing device 100. Motion may be characterized as any combination of parameters, such as velocity, acceleration, direction, frequency, and the like.
In some cases, gestures involve the joint application of contact (via a touch and/or pen device, etc.) with movement. For example, a user may touch an object on the display surface and then move the computing device 100 to a specified orientation and/or move the computing device 100 in a specified dynamic manner, such as by tracing a specified gesture path. The IBSM114 can interpret the resulting input events as a request to perform some action on a specified object or other content on the display surface (e.g., an object that the user touches with a finger or a pen device). In this case, the contact input event at least partially overlaps with the movement input event in time. In other cases, the user may apply the contact to computing device 100 and then apply some indicative movement to computing device 100, or first apply some indicative movement to computing device 100 and then apply the contact to computing device 100. The gesture may include one or more additional sequential phases of movement and/or contact, and/or one or more additional phases of simultaneous movement and contact. In these cases, the contact input events may be interleaved with the movement input events in any manner. In other words, the contact input event does not need to overlap in time with the movement input event. In other cases, a gesture may be uniquely related to applying a specified movement to the computing device 100. In other cases, the user may apply two or more gestures simultaneously. In other cases, the user may apply a gesture that transitions seamlessly to another gesture, and so on. The following explanation illustrates these general points with respect to a specific example. These examples are representative and are not exhaustive of the many gestures that may be created based on intentional movement of computing device 100.
In many of the examples below, the user is illustrated as making contact with the display surface of the display mechanism 102. Alternatively or additionally, the user may interact with a pad-type input device, as shown in scenario C of FIG. 3. Further, in many cases, the user is illustrated as making selections with a two-handed approach, e.g., where the user holds the device in one hand and makes selections with a finger (or fingers) of the other hand. In any of these cases, however, the user can alternatively make selections with the thumb (or other part) of the hand that holds the device. Finally, a general reference to a user's hand should be understood to include any portion of the hand.
In any of the examples that follow, computing device 100 may present any type of feedback to the user indicating that a gesture has been recognized and a corresponding behavior is to be applied or is in the process of being applied. For example, the computing device 100 may present any combination of visual feedback indicators, audible feedback indicators, tactile (e.g., vibratory) feedback indicators, and the like. Using non-visual feedback indicators may be useful in some situations because it may be difficult for a user to notice a visual indicator when moving a computing device. According to another general feature, the computing device 100 may render an undo command to allow the user to remove the effects of any undesired gestures.
Fig. 4 illustrates a scenario in which a user grips computing device 402 in hand 404 (or both hands, not shown). The user then rotates the computing device 402 so that its distal end 406 moves downward in the direction of arrow 408. In a supplemental movement, the user may rotate the computing device 402 such that its distal end 406 moves upward in a direction opposite to arrow 408.
Fig. 5 illustrates a scenario in which a user grips a computing device 502 in a hand 504. The user then rotates the computing device 502 so that its side edge 506 moves upward in the direction of arrow 508. In a supplemental movement, the user may rotate the computing device 502 such that its side edge 506 moves downward in a direction opposite to arrow 508.
Although not shown, in another rotation, the user may rotate the computing device in a plane parallel to the floor. More generally, these three axes are merely representative; the user may rotate the computing device in any plane along any axis. Further, the user may combine different types of rotational and/or translational movements into a composite gesture.
FIG. 6 illustrates one gesture that incorporates a rotational movement of the type illustrated in FIG. 4. In this case, assume that the user's intent is to perform some function with respect to the object 602 displayed on the display surface of the computing device 100. For example, the user may want to zoom in and enlarge the object 602. As part of the gesture, the user applies contact to the object 602, such as by touching the object 602 with his or her thumb (in one merely representative case). Note that in this case, the center of expansion for the zoom action may correspond to the (x, y) point associated with the position of the thumb on the display surface. Alternatively, the center of expansion may correspond to a fixed offset from the (x, y) point associated with the thumb position; in this case, the zoom action (in one example) may be generated at a point directly above the thumb (rather than below the thumb). In another example, a user may want to enter a value via object 602. To do so, the user may touch the object to increase the value (and touch some other object not shown to decrease the value).
In these scenarios, the angle at which the user holds the computing device 604 controls the rate at which actions are performed, such as the rate at which scaling occurs (in the first example described above) or the rate at which the value in the numeric field increases or decreases (in the second example described above), and so forth. The angle may be measured with respect to an initial orientation of the computing device when the user touches the object 602. Optionally, the angle may be measured in relation to the initial orientation of the device when the user starts an indicative rotational movement, which substantially coincides with a touch action (or other contact action), wherein such indicative movement may in some cases follow the contact or in other cases immediately precede the contact.
In general, to perform a rotation-based gesture, a user may hold computing device 604 with a hand 606 (or both hands), touch object 602, and tilt computing device 604. That is, if the user wants to substantially increase the rate of behavior, the user tilts the computing device 604 a greater amount with respect to the original orientation. If the user wants to increase the rate of behavior by a smaller amount, the user may tilt the computing device 604 by a smaller amount. The user can reduce the rate at any time by reducing the tilt angle. In other cases, the IBSM114 can also map different behaviors to different rotational directions with respect to the initial tilt angle. For example, the user may zoom in on the object 602 by tilting the computing device 604 downward, and zoom out on the object 602 by tilting the computing device 604 upward. To prevent minor unintended rate changes, the IBSM114 can incorporate a range of initial tilt angles in which no rate changes occur. If the user exceeds a threshold at the end of this dead band, the IBSM114 begins to change rates. Such a threshold may be defined in both rotational directions relative to the initial tilt angle.
The IBSM114 can apply any type of function to map the tilt angle to the amount of zoom (or whatever behavior is mapped to the tilt angle). In one case, the IBSM114 can apply a linear rate function. In another case, the IBSM114 can apply any type of nonlinear rate function. For example, in the latter case, the IBSM114 can apply inertia-based functions, any type of control system function, and the like.
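The rate-control behavior just described might look like the following sketch, in which the tilt away from the initial orientation is first gated by the dead band and then passed through a linear or nonlinear rate function; all angles, thresholds, and gains here are illustrative assumptions, not values taken from the disclosure.

```python
def zoom_rate_from_tilt(tilt_deg: float,
                        initial_tilt_deg: float,
                        dead_band_deg: float = 5.0,
                        gain: float = 0.08,
                        nonlinear: bool = False) -> float:
    """Map the current tilt angle to a zoom rate (positive = zoom in).

    No rate change occurs inside the dead band around the initial tilt angle;
    beyond the dead band, the rate grows with the excess tilt, either linearly
    or with a nonlinear (here, quadratic) rate function.
    """
    delta = tilt_deg - initial_tilt_deg
    if abs(delta) <= dead_band_deg:
        return 0.0                       # within the dead band: hold the current zoom
    excess = abs(delta) - dead_band_deg  # tilt beyond the dead-band threshold
    magnitude = gain * (excess ** 2 if nonlinear else excess)
    return magnitude if delta > 0 else -magnitude


# Example: tilting well past the starting orientation zooms in; tilting the
# opposite way zooms out; small tilts inside the dead band do nothing.
if __name__ == "__main__":
    print(zoom_rate_from_tilt(tilt_deg=25.0, initial_tilt_deg=5.0))   # zoom in
    print(zoom_rate_from_tilt(tilt_deg=-15.0, initial_tilt_deg=5.0))  # zoom out
    print(zoom_rate_from_tilt(tilt_deg=7.0, initial_tilt_deg=5.0))    # inside dead band
```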
In the example of FIG. 6, the user may also apply a panning command to the content presented by the computing device 604. For example, as described above, the user may touch the object 602 with one finger and rotate the computing device 604 to a prescribed orientation. This may expand a portion of the content associated with the object 602. Simultaneously, or in an interleaved manner, the user may use his or her other hand (or the same hand) to pan the content in a lateral direction, for example, by touching the content with a finger and moving the finger in a desired direction. The embodiment of FIG. 6 accommodates this type of compound control because it leaves one of the user's hands free to make a panning gesture (or any other type of meaningful gesture or input command). This example also illustrates how one gesture (zooming by rotating the device) can seamlessly join another gesture (such as panning by moving a finger across the display surface) when performed in an interleaved manner.
In other cases, the user may perform the above-described rotational movement without specifying any particular object on the display surface. In response, the IBSM114 can apply the zoom (or any other prescribed behavior) to all content presented on the display surface, or any other global content that is appropriate for the context of the environment-specific scenario.
FIG. 7 shows a concept similar to the scenario of FIG. 6. Here, the user holds the computing device 702 in one hand 704 (where, in this particular case, the user does not touch any portion of the display surface with the hand 704). The user then manipulates a scrolling mechanism 706 with his or her other hand 708. For clarity of illustration, FIG. 7 shows the scrolling mechanism 706 as being associated with an explicit scroll handle. In other cases, however, the scrolling mechanism may be invoked by touching and moving any content presented on the display surface. This action performs a conventional scrolling operation. In addition, the user may tilt the computing device 702 from an initial position to increase or decrease the rate at which scrolling is performed. The scenario illustrated in FIG. 7 provides a satisfactory user experience because it offers a convenient means for progressing quickly through very long content items, while also allowing the user to progress through content items more slowly when needed.
In the examples of FIGS. 6 and 7, the computing device (604, 702) provides a behavior governed by the orientation of the device (604, 702) relative to an initial starting position. Additionally or alternatively, the computing device may provide a behavior that responds to the rate at which the user rotates the device or performs some other telltale movement. For example, in the example of FIG. 6, the user may touch the object 602 with one finger and tilt the computing device 604 in a downward direction, as shown. The IBSM 114 interprets the rate at which the user performs this tilting movement as indicating the rate at which some prescribed behavior is to be performed. For example, if the user quickly tilts the computing device 604 in a downward direction (in the manner of casting a line with a fishing rod), the IBSM 114 can rapidly increase (or decrease) the zoom level. Or assume that the object 602 is associated with a scrolling command. The IBSM 114 can interpret the user's "casting" action as a request to scroll quickly through a large document. For example, the user may repeatedly snap the device downward in this metaphorical manner to make successive large jumps through a long document (or, alternatively, to make a discrete jump to the beginning or end of the document). In this mode of behavior, the IBSM 114 can be configured to ignore the motion of the computing device when the user returns it to the initial position after completing the downward snap.
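Detecting the quick downward "casting" snap just described could reduce to watching the angular velocity about the tilt axis, registering a jump whenever it exceeds a fast-motion threshold and ignoring the slower return movement. The following sketch is a hypothetical illustration only; the thresholds and the single-axis treatment are assumptions.

```python
from typing import List


def count_downward_snaps(pitch_rates: List[float],
                         snap_threshold: float = 4.0,
                         return_threshold: float = 1.0) -> int:
    """Count quick downward snaps in a stream of pitch angular velocities (rad/s).

    A snap is registered when the pitch rate exceeds snap_threshold; the slower
    motion of returning the device to its initial position (below
    return_threshold) is ignored, so each snap produces one discrete jump.
    """
    jumps = 0
    in_snap = False
    for rate in pitch_rates:
        if not in_snap and rate > snap_threshold:
            in_snap = True          # fast downward rotation: register one jump
            jumps += 1
        elif in_snap and rate < return_threshold:
            in_snap = False         # device has slowed or is being returned
    return jumps


# Example: two fast snaps separated by a slow return movement -> two jumps.
print(count_downward_snaps([0.2, 5.1, 6.0, 0.5, -0.8, 4.8, 0.3]))  # prints 2
```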
Fig. 8 shows a scenario incorporating the rotational movement shown in fig. 5. Here, assume that the user's intent is to apply a finger to mark the original page, travel to another page in the document, and then return to the original page. To do so, the user may hold the computing device 802 in a hand 804. The user may place a finger 806 on a tab 808 to indicate a particular page or other location in the multi-part content item. Assume that the user then browses subsequent pages or other parts of the multi-part content item. To return to the original page, the user can flip the computing device 802 in the direction of arrow 810, just as a person can flip through the pages of a book. This restores the original page on the display surface of the computing device 802.
The above scenario may be varied in any manner. For example, the tab 808 may be located at a different position on the display surface, or eliminated altogether in favor of some other visual aid. Optionally, the computing device 802 may eliminate all such permanent visual aids. Alternatively or additionally, the computing device 802 may use visual, audible, and/or tactile indicators, or the like, that are dynamically invoked when the user places a finger on the display surface in a manner that is interpreted as a bookmark. The computing device 802 may also provide any type of suitable visual experience when it conveys the page-turning action to the user, such as by displaying a visual simulation of the page being turned. Further, as described below, the interaction illustrated in FIG. 8 may be implemented on a dual-screen device; there, the user may hold one device portion while rotating the opposing device portion to invoke the "flip back" behavior.
The flipping gesture shown in FIG. 8 may be used to implement other flipping behaviors on the display surface, as defined by a particular environment or application. In merely one illustrative case, assume that the display surface of the computing device 802 displays any type of object 812 having multiple sides, dimensions, planes, and the like. The user may advance to another side of the object with a flipping motion of the kind shown in FIG. 8. This has the effect of flipping the object 812 onto its side to display a new top face of the object 812. The user may also perform this type of operation to transition between different applications, options, filters, views, and the like. The user may also perform this type of operation to remove a first object that is presented on top of a second, underlying object. That is, the user may quickly tilt the computing device 802 to move the upper object to one side of the lower object, thereby revealing the lower object, which makes working with stacked objects simple and intuitive. The user may repeatedly perform this tilting operation to rotate the object by successive increments (where the increment may be defined in an application-specific manner), to successively remove layers, and so on.
FIG. 9 illustrates the same basic page-turning gesture described above with respect to FIG. 8, but in the context of an electronic book reader type computing device 902. Here, the user may secure his or her thumb 904 (or other finger) at the lower left edge (or some other location) of the display surface of the computing device 902. This bookmarking operation marks the original page. Then, assume again that the user electronically flips through subsequent pages of the document. In one case, to return to the original page, the user may then flip one device portion of the reader-type computing device 902 upward in the manner shown in FIG. 8. The IBSM114 can detect this indicative flip gesture based on the motion sensing input mechanism. Alternatively or additionally, the IBSM114 can detect this gesture based on movement of the two display portions (906, 908) relative to each other, such as from an angle sensor (which detects the angle between the device portions).
In another case, to enable the IBSM 114 to more readily distinguish the page-turning gesture, the user can close the two device portions (906, 908) of the computing device 902, metaphorically sandwiching his or her thumb 904 (or other finger) between the two device portions (906, 908). The user may then perform the flipping motion described above to return to the original page. The computing device 902 can determine that the user has closed the device portions (906, 908) around the thumb 904 by sensing the angle between the device portions (906, 908). Even without the flipping action described above, the act of placing the thumb between the two device portions (906, 908) serves the useful purpose of registering a bookmark location.
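For the dual-screen reader, the bookmark and flip-back behavior could plausibly be driven by the sensed hinge angle between the two device portions, roughly as in the sketch below; the state names, angle thresholds, and change criterion are all invented for illustration.

```python
def interpret_hinge_angle(hinge_angle_deg: float,
                          thumb_on_surface: bool,
                          closed_threshold_deg: float = 25.0) -> str:
    """Classify the state of a dual-screen reader from the sensed hinge angle.

    Closing the two device portions around a resting thumb registers a bookmark;
    the angle threshold is illustrative only.
    """
    if thumb_on_surface and hinge_angle_deg <= closed_threshold_deg:
        return "bookmark-registered"
    return "open" if hinge_angle_deg > closed_threshold_deg else "closed"


def detect_flip_back(angle_history_deg, min_change_deg: float = 45.0) -> bool:
    """Treat a rapid relative rotation of one portion (a large change in hinge
    angle over a short window) as the 'flip back to bookmark' gesture."""
    return (len(angle_history_deg) >= 2 and
            abs(angle_history_deg[-1] - angle_history_deg[0]) >= min_change_deg)


# Example: the user closes the reader onto a thumb, then swings a portion open.
print(interpret_hinge_angle(20.0, thumb_on_surface=True))   # bookmark-registered
print(detect_flip_back([22.0, 60.0, 130.0]))                # True
```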
Fig. 10 illustrates a scenario in which a user holds a computing device 1002 with a hand 1004 and then points the computing device 1002 in the direction of some target entity, or some designated agent of the target entity. The user may then use the other hand 1006 (or the same hand) to identify a certain object 1008 (e.g., a document, a command, etc.) on the display surface of the computing device 1002. The IBSM114 interprets this gesture as a request to send a copy of the object 1008 to a target entity, or a request to synchronize the content and/or state of the object 1008 with the target entity, or a request to achieve other special purposes with respect to the object 1008 and the target entity. The IBSM114 can determine the direction in which the computing device 1002 is pointing based on any type of movement-type input mechanism. The IBSM114 can determine the relative position of the target entities in different ways, such as by manually recording the position of the target entities in advance, automatically sensing the position of the target entities based on signals emitted by the target entities, automatically sensing the position of the target entities based on image capture techniques performed by the computing device 1002 itself, and so forth. Alternatively or additionally, the IBSM114 can determine the orientation of the computing device 1002; the supplemental IBSM of the target computing device may also determine an orientation of that target device. The IBSM114 can then determine that the computing device 1002 is pointing at the target computing device by determining whether the various users are pointing their devices at each other. In other cases, one or more users may be pointing at a shared target object, and the IBSM of the respective computing devices may detect this fact in any of the methods described above. For example, the shared target object may correspond to a shared peripheral device (e.g., a printer, a display device, etc.).
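One way to test whether the device is aimed at a registered target entity is to compare the device's pointing direction (derived from its sensed orientation) with the bearing to the target's known position. The sketch below assumes two-dimensional positions, a heading in degrees, and an angular tolerance chosen purely for illustration.

```python
import math
from typing import Tuple


def is_pointing_at(device_pos: Tuple[float, float],
                   device_heading_deg: float,
                   target_pos: Tuple[float, float],
                   tolerance_deg: float = 15.0) -> bool:
    """Return True if the device's heading is within tolerance of the bearing to the target."""
    dx = target_pos[0] - device_pos[0]
    dy = target_pos[1] - device_pos[1]
    bearing_deg = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two angles, in degrees.
    diff = (device_heading_deg - bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg


# Example: a device at the origin with a heading of 42 degrees, aimed at a
# shared printer registered at (3, 3) (bearing 45 degrees) -> True.
print(is_pointing_at((0.0, 0.0), 42.0, (3.0, 3.0)))
```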
In one example of the scenario shown in FIG. 10, a user may select a file (or other content item) by touching a representation of the file on a display surface. The user may then point the device at the trash can to facilitate deletion of the file (assuming that the trash can's location was previously registered or otherwise determinable in one or more of the ways described above). In a variation, the user may touch a representation of a file on the display surface and tilt the computing device 1002 toward a trash icon on the display surface itself, thus metaphorically transferring this file to a trash can. Still other applications are possible. In one case, whenever a content item is deleted in this manner, the IBSM114 can present a visual representation of the undo command. This enables the user to reverse the effect of unintentional deletion.
Fig. 11 illustrates another scenario in which a user holds a computing device 1102 in a hand 1104 and moves the computing device 1102 to contact fingers of the user's other hand 1106 or other portion of the other hand 1106. This is in contrast to conventional movement in which a user depresses a display surface of the computing device 1102 using the hand 1106. The IBSM114 can associate any type of command with this movement. This type of gesture may be coupled with various security measures to prevent its inadvertent activation. For example, although not shown, the user may apply the thumb of the left hand 1104 to contact an icon or the like to achieve this particular gesture pattern.
In another case (not shown), the user may tap different objects on the display surface with different degrees of force, e.g., with normal force or with greater force. The IBSM114 can interpret these two types of touch contacts in different ways. The IBSM114 can then perform a first type of behavior for a soft tap and a second type of behavior for a heavier tap. The behavior invoked is application specific. For example, an application may answer an incoming call with a soft tap and ignore the incoming call with a heavier tap. Or the application may use a light tap to mute the ring tone and a heavy tap to ignore the incoming call altogether.
To implement this type of gesture, the movement-type input mechanism 110 may provide input events corresponding to, for example, gently resting a finger against the display surface, tapping the display surface with "normal" force, and tapping the display surface forcefully. The IBSM 114 can then examine the input signals associated with these input events to classify the type of input action that has occurred. For example, the IBSM 114 can interpret input events having large-magnitude signal spikes as indicating a hard-tap gesture. Additionally or alternatively, the IBSM 114 can use audio signals (e.g., received from one or more microphones) to distinguish between soft and hard taps, and/or signals received from any other input mechanism (such as a pressure sensor, etc.).
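In its simplest form, classifying a contact as a rest, a normal tap, or a hard tap could amount to thresholding the peak of the movement signal recorded around the moment of contact, as in this sketch; the threshold values are invented for illustration.

```python
from typing import List


def classify_tap_force(accel_magnitudes: List[float],
                       rest_max: float = 1.0,
                       normal_max: float = 6.0) -> str:
    """Classify a contact from the accelerometer spike that accompanies it.

    A barely perceptible spike suggests a finger gently resting on the surface,
    a moderate spike suggests a normal tap, and a large spike suggests a hard tap.
    """
    peak = max(accel_magnitudes, default=0.0)
    if peak <= rest_max:
        return "rest"
    if peak <= normal_max:
        return "normal-tap"
    return "hard-tap"


# Example: an application might mute the ringer on a normal tap and ignore
# the incoming call entirely on a hard tap.
print(classify_tap_force([0.2, 0.4, 0.3]))    # rest
print(classify_tap_force([0.5, 3.8, 1.2]))    # normal-tap
print(classify_tap_force([0.6, 11.5, 2.0]))   # hard-tap
```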
In related cases, the IBSM114 can distinguish between different types of drag movements based on the force with which the user initially applies a finger to the display surface. This allows, for example, the IBSM114 to distinguish between a light swipe and a heavy slap and drag on the screen. The IBSM114 can again map any behavior to different types of tap-and-drag gestures.
Fig. 12 shows another scenario in which a user holds a computing device 1202 having two device portions (1204, 1206) in a hand 1208. Using the other hand 1210, the user touches a display surface provided on the two device portions (1204, 1206) with a prescribed angle 1212 between the two device portions (1204, 1206). The IBSM114 can map this gesture to any type of behavior. Variations of this gesture may be defined by dynamically changing the angle 1212 between the two device portions (1204, 1206) and/or dynamically changing the position of the fingers and thumb placed on the display surfaces of the two device portions (1204, 1206).
FIG. 13 depicts a more general gesture in scenario A, in which a user places a distinctive combination of fingers 1304 on the display surface of a computing device 1306 using a hand 1302. Using the other hand 1308, the user may then move the computing device 1306 in any telltale manner, such as by shaking the computing device 1306 in a particular direction. Because of the distinctiveness of this multi-touch contact, the computing device 1306 is unlikely to confuse this gesture with incidental motion of the computing device 1306. The user may also apply different multi-touch contacts to convey different commands in conjunction with the same basic motion.
More generally, to repeat, the user may apply any prescribed type of contact to an individual object or particular region of the display surface and then apply a prescribed movement to the computing device. The combination of focused contact and movement helps distinguish intentional gesture-based movement from accidental movement. For example, as shown in scenario B of FIG. 13, the user may touch an object 1310 on the display surface of a computing device 1312 and then vigorously shake the computing device 1312. The IBSM 114 can interpret this action in any predetermined manner, such as a request to delete the designated object 1310, a request to undo the last action applied to the designated object 1310, a request to move the designated object 1310 to a particular folder, a request to bring the object 1310 forward so that it is displayed in the foreground, and so forth.
In another case, the user may touch an object on the display surface, whereupon the IBSM 114 presents a menu. The user may then apply a telltale movement to the computing device while still touching the display surface. The IBSM 114 can interpret this joint action as a request to expand the menu, or as a request to select a particular option from the menu. Note that this approach combines the following three steps into a single continuous action: (1) selecting an object; (2) activating a motion-sensing gesture mode; and (3) selecting a particular command or gesture to apply to the selected object. Other examples of select-and-move gestures are described above, for instance with respect to zooming, scrolling, and page flipping.
Further, the IBSM 114 can interpret dynamic motion gestures in different ways based on the orientation of the computing device. For example, in a first scenario, assume that the user holds the computing device in his or her hand and shakes it back and forth. In a second scenario, assume that the user places the computing device flat (face down) on a table and shakes it back and forth. The IBSM 114 can interpret these two scenarios as conveying two different gestures. Further, the IBSM 114 can interpret a movement-based gesture in the context of whatever application the user is interacting with at the time. For example, the IBSM 114 can interpret the same movement-type gesture in different ways when it is performed in the context of two different applications.
Finally, FIG. 14 illustrates a scenario in which a user wants to rotate the computing device 1402 from landscape mode to portrait mode (or vice versa) without causing the content presented on the computing device 1402 to rotate as well. To perform this task, the user may touch any portion of the display surface, such as (but not limited to) the corner 1404. For example, when the computing device 1402 is placed on a flat surface (such as a table), the user may press down on the display surface, as indicated by the gesture made with the hand 1406. Or the user may clamp the computing device 1402 while holding it, as indicated by the gesture made with the hand 1408. The user then rotates the computing device 1402 approximately 90 degrees (or some other application-specific amount of rotation). In one implementation, once this gesture is recognized, the IBSM 114 prevents the computing device 1402 from automatically rotating the content. The same behavior can also be defined in reverse. That is, by default, the computing device 1402 does not rotate content when the computing device 1402 rotates; however, contacting the display surface while rotating the computing device 1402 causes the content to rotate as well. The rotation may be performed about any axis, such as an axis defined by the touch point itself.
In another example, a user may establish a touch point and then rotate the computing device without moving the computing device relative to the finger that establishes the touch point. For example, a user may clamp the computing device to establish a touch point and then recline on a couch or bed, and in doing so rotate the computing device approximately 90 degrees relative to its initial orientation. In this case too, the IBSM 114 can interpret the device movement together with the applied touch point as an instruction to disable (or enable) content rotation.
In one case, the computing device may maintain the rotation latch after the computing device has been moved to the desired orientation, even after the user removes the touch point. When the user finishes operating in this mode, he or she can quickly activate a command to remove the rotation latch. Additionally or alternatively, the user may change the orientation of the device to automatically remove the rotation latch.
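The rotation-latch behavior just described can be pictured as a small piece of state. The sketch below is a hypothetical illustration; the class name, the 80-degree trigger, and the method names are invented for clarity and are not taken from this description.

```python
class RotationLatch:
    def __init__(self):
        self.latched = False          # content auto-rotation is suppressed while True

    def on_device_rotated(self, degrees: float, touch_held: bool):
        # A deliberate ~90-degree rotation performed while a contact point is held
        # engages the latch, so the content keeps its current orientation.
        if touch_held and abs(degrees) >= 80:
            self.latched = True

    def on_release_command(self):
        # An explicit command (or a later reorientation) removes the latch.
        self.latched = False

    def should_auto_rotate_content(self) -> bool:
        return not self.latched
```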
A.3. Background-Related Movement
This section describes the operation of the IBSM 114 when the movements applied to the computing device 100 are not intentional components of deliberate gestures and therefore constitute background movement. The IBSM 114 can interpret the nature of this movement and act on it to improve the accuracy of its analysis of the input action. The examples presented in this section are representative and not exhaustive.
Fig. 15 illustrates a first scenario in which a user presses down on a display surface of a computing device 1506 using a finger 1502 of a hand 1504. Or assume that the user removes the finger 1502 from the display surface. In response to this action, the IBSM 114 receives touch input events from the touch input mechanism 106. These events register the actual (or imminent) contact of the finger 1502 with the display surface, or the removal of that contact. The IBSM 114 also receives movement input events from the movement-type input mechanism 110. These events register the motion that the user inadvertently imparts to the computing device when applying the finger 1502 to, or removing it from, the display surface.
The first input interpretation module 1508 can analyze the contact input events, for example, by determining, from its perspective, the location and time at which the touch contact was applied or removed. The first input interpretation module 1508 draws its conclusions primarily on the basis of the extent (and/or shape) of contact between the user's finger and the display surface. The second input interpretation module 1510 can analyze the movement input events, for example, by also determining, from its perspective, the location and time at which the touch contact was applied or removed, along with any jostling that occurs in the process. This is possible because the location and time of contact application or removal can also be inferred from the movement input events. More specifically, the second input interpretation module 1510 draws conclusions based in part on the amplitude, general shape, and frequency characteristics of the movement signal provided by the movement-type input mechanism 110. This can help ascertain when the contact was applied or removed. The second input interpretation module 1510 can also draw conclusions based on the manner in which the computing device moves once the contact is applied or removed. This can help determine where the contact was applied or removed. For example, if a user holds the computing device upright and taps a corner, the computing device can be expected to wobble in an indicative manner, revealing the location at which the touch occurred.
The final input interpretation module 1512 can then use the conclusions of the second interpretation module 1510 to modify (e.g., correct) the conclusions of the first interpretation module 1508. For example, the final input interpretation module 1512 may conclude that a touch was actually applied at location (x, y), but was registered at (x + Δx, y + Δy) because of the unintentional movement associated with applying the touch. Similar unintentional movement may occur when the user removes his finger. Alternatively or additionally, the final input interpretation module 1512 may adjust the time at which a touch is deemed to have been applied or removed. As another example, if a contact is registered with little or no associated movement signal, the final input interpretation module 1512 may conclude that the contact likely represents an inadvertent contact with the display surface. The IBSM 114 can therefore ignore this contact or otherwise treat it differently from a "normal" contact action.
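One way to picture how the final interpretation module might blend the two modules' conclusions is a confidence-weighted average, as in the following sketch. The Estimate fields and the weighting rule are assumptions made for illustration only, not the disclosed algorithm.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    x: float
    y: float
    t: float           # time at which the contact is judged to have been applied
    confidence: float  # 0..1

def fuse(contact: Estimate, movement: Estimate) -> Estimate:
    """Blend the contact-derived and movement-derived conclusions, trusting
    each in proportion to its confidence (a confidence-weighted average)."""
    denom = (contact.confidence + movement.confidence) or 1.0
    w = movement.confidence / denom
    return Estimate(
        x=contact.x + w * (movement.x - contact.x),
        y=contact.y + w * (movement.y - contact.y),
        t=contact.t + w * (movement.t - contact.t),
        confidence=max(contact.confidence, movement.confidence),
    )
```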
The IBSM 114 can take various actions based on the conclusions reached by the final input interpretation module 1512. In one case, the IBSM 114 can correct the touch input events to account for the accidental movement that has been detected, for example, by refining the indication of the location and/or time at which the user applied or removed the touch contact. In addition, the IBSM 114 can restore the display surface and/or application to the state that existed before the inadvertent movement occurred.
In the case of FIG. 15, multiple component interpretation modules feed their conclusions to the final interpretation module 1512. In another implementation, however, the final interpretation module 1512 may perform its analysis directly on the raw input events provided by the input mechanisms, that is, without the component interpretation modules.
The basic functionality of FIG. 15 may also be used to enhance the recognition of gestures. For example, a flip-type movement may resemble a fast scrolling action (the two are similar from the standpoint of the nature of the contact made with the display surface), yet the user may perform the two actions to convey different input commands. To help distinguish between the two input actions, the IBSM 114 can interpret the resulting contact input events in conjunction with the movement input events. The movement input events can help distinguish the two input actions, provided that the flipping motion has a different movement profile than the scrolling operation.
The IBSM 114 can also use the motion signals to detect different ways of expressing a particular action, such as scrolling with the thumb of the hand that holds the computing device versus scrolling with the index finger of the other hand. When detected, such information can be used to optimize the user's interaction with the computing device, for example, by enabling the IBSM 114 to more accurately discriminate the user's input actions. For example, the computing device may adjust its interpretation of an input event based on an understanding of which fingers are being used to contact the computing device; e.g., it may allow greater latitude for off-axis motion when it determines that the user is scrolling with a thumb.
In another example, the IBSM 114 can determine which fingers are being used when the user types on a soft (touch screen) keyboard. In one case, the user may type with two thumbs. In another case, the user may type with a single finger while holding the device with the other hand. The two modes have different respective movement profiles that describe the keystrokes. The IBSM 114 can use these movement profiles to help infer which method the user is employing to strike a given key. The IBSM 114 can then apply the appropriate Gaussian distribution when interpreting each type of touch contact. This can improve the efficiency of touch screen typing and reduce the likelihood of errors.
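As an illustration of the idea, the sketch below scores candidate keys under a mode-specific Gaussian. The spread values, mode names, and function names are invented placeholders, not parameters taken from this description.

```python
import math

# Assumed spreads: wider for one-finger typing than for two-thumb typing (pixels).
SIGMA_BY_MODE = {"two_thumbs": 6.0, "single_finger": 10.0}

def key_likelihood(touch_xy, key_center_xy, mode):
    """Likelihood that a touch at touch_xy was aimed at the key centered at key_center_xy."""
    sigma = SIGMA_BY_MODE[mode]
    dx = touch_xy[0] - key_center_xy[0]
    dy = touch_xy[1] - key_center_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def best_key(touch_xy, key_centers, mode):
    """Pick the key with the highest likelihood under the mode-specific Gaussian.
    key_centers maps key names to (x, y) centers."""
    return max(key_centers, key=lambda k: key_likelihood(touch_xy, key_centers[k], mode))
```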
The functionality of FIG. 15 may also be used to detect input actions that are entirely unintentional. For example, in some cases, legitimate input events may have one or more prescribed movement profiles, while unintentional input events may have one or more other prescribed movement profiles. For example, a "clean" tap on the display surface with an index finger may have a profile indicating legitimate input activity, while an accidental contact with the display surface may have a profile indicating unintentional input activity, caused, for example, by a thumb or little finger brushing across the display surface.
Fig. 16 illustrates another scenario in which a user picks up the computing device 1602 or puts the computing device 1602 away after use. In the specific example of FIG. 16, the user is in the process of placing the computing device 1602 into a pocket 1604 with a hand 1606. In this case, the IBSM 114 can receive movement input events from the movement-type input mechanism 110 that exceed a prescribed threshold within a time window. The input events may also reveal that the device has been moved into an unusual viewing posture. Large movement input events are generated because the user may move or spatially flip the computing device 1602 quickly in order to place it in the pocket 1604. Assume also that the user inadvertently touches the display surface of the computing device 1602 while placing the computing device 1602 into the pocket 1604. If so, the IBSM 114 can conclude (based on the large movement input events and, optionally, other factors) that the touch input events are likely unintentional. The computing device 1602 may also include a light sensor and/or a proximity sensor that can independently confirm that the computing device 1602 has been placed in the pocket 1604, such as when the light sensor suddenly indicates that the computing device 1602 has moved into a dark environment. The IBSM 114 then responds to the touch input events by ignoring them. Further, as above, the IBSM 114 can restore the display surface to its state prior to the inadvertent input action. Additionally or alternatively, the IBSM 114 can restore a previous application state.
For example, assume that a user touches the display surface of the computing device while the device is in his pocket. The IBSM 114 may be able to conclude, on the basis of movement alone, that the touch is unintentional. The device can then "undo" any commands triggered by the inadvertent touch. If the movement is not sufficiently informative, the IBSM 114 can conclude on the basis of the light sensor and/or other sensors that the touch is unintentional. The computing device 1602 can then, again, remove the effects of the inadvertent contact.
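A minimal sketch of this kind of stow-detection test appears below. The threshold values, sensor inputs, and function name are illustrative assumptions rather than values from this description.

```python
def touches_likely_unintentional(movement_magnitudes, ambient_light, *,
                                 movement_threshold=2.5, dark_lux=5.0):
    """Return True when recent motion and/or light readings suggest the device
    is being stowed, so touch input arriving in this window should be ignored."""
    large_motion = max(movement_magnitudes, default=0.0) > movement_threshold
    suddenly_dark = ambient_light is not None and ambient_light < dark_lux
    # Either signal alone may suffice; together they raise confidence further.
    return large_motion or suddenly_dark
```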
In another scenario (not shown), a user may operate the computing device 1602 in a noisy (i.e., unstable) environment, such as on a bumpy ride. In this case, the IBSM 114 can analyze the movement input events to detect regular shaking of the computing device 1602, which is indicative of a noisy environment. To address this situation, the IBSM 114 can perform the actions described above, such as ignoring those portions of the touch input events that are assessed as accidental, and restoring the display state and/or application state to remove erroneous changes. Additionally or alternatively, the IBSM 114 can attempt to modify the touch input events to mitigate the background noise associated with the movement input events. Additionally or alternatively, the IBSM 114 can reduce its sensitivity with respect to indicating that a valid touch event has occurred. This forces the user to enter intended touch input events more deliberately and precisely, but it also has the desirable effect of more effectively ignoring unintentional input events caused by the noisy environment.
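The sensitivity adjustment could be as simple as scaling the touch-acceptance threshold with the measured vibration level, as in this hypothetical sketch; the scaling rule and parameter names are assumptions.

```python
def accept_touch(touch_strength, baseline_threshold, vibration_level):
    """Require a firmer, more deliberate touch when background vibration is high."""
    # Scale the acceptance threshold up with the measured vibration level.
    threshold = baseline_threshold * (1.0 + vibration_level)
    return touch_strength >= threshold
```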
FIG. 17 illustrates a scenario in which a user holds the computing device 1702 in a hand 1704 during use. In this case, the user's thumb 1706 may inadvertently contact the display surface of the computing device 1702. Conversely, FIG. 18 shows a scenario in which a user lays the computing device 1802 flat on a table or the like. The user then uses a hand 1804 to press the index finger 1806 against the lower-left corner of the display surface of the computing device 1802. The IBSM 114 can examine the respective orientations of the computing devices (1702, 1802) and conclude that the thumb placement shown in FIG. 17 is more likely to be inadvertent than the finger placement shown in FIG. 18, even though the contacts occur in the same region of the display surface and may have a similar shape. This is because it is natural for the user to hold the computing device 1702 in the manner shown in FIG. 17 when holding it aloft, whereas when the computing device 1802 is lying flat in the manner shown in FIG. 18, it is less likely that a single finger would be placed on the display surface accidentally. The motion analysis described above can help confirm conclusions based on orientation. In addition, one or more other sensors (such as light sensors, proximity sensors, etc.) placed at appropriate locations on the computing device 1802 can help determine whether the computing device is lying flat on a table in the manner shown in FIG. 18. The IBSM 114 can address the accidental touches shown in FIG. 17 in any of the ways described above, such as by ignoring the corresponding touch input events.
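For illustration, such an orientation-dependent judgment could be expressed as a small table of priors, as in the sketch below; the posture labels, region labels, and probability values are invented.

```python
# Hypothetical priors for how likely a contact is accidental given device posture.
ACCIDENTAL_PRIOR = {
    ("held_upright", "edge_region"): 0.8,   # gripping thumb, as in FIG. 17
    ("flat_on_table", "edge_region"): 0.2,  # deliberate corner press, as in FIG. 18
}

def accidental_probability(posture: str, contact_region: str) -> float:
    """Fall back to an uninformative prior when the combination is unknown."""
    return ACCIDENTAL_PRIOR.get((posture, contact_region), 0.5)
```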
In the case of FIG. 17 (and, indeed, in all scenarios), the IBSM 114 can also take other contextual factors into account when interpreting contact input events and movement input events. One such contextual factor is the application with which the user is interacting at the time the input event occurs. For example, assume that a first user is reading content on an e-book reader type device and places a thumb on a particular portion of a page. Assume that a second user is scrolling through a web page and places a thumb on the same general portion of the page. The IBSM 114 is more likely to interpret the thumb placement as intentional in the first scenario than in the second. This is because, in the first scenario, the user can bookmark a page by placing his or her thumb on it. Further, the cost of erroneously bookmarking a page is negligible, while the cost of accidentally scrolling or flipping to another page, interrupting the user's current task, is high.
In another, dual-screen, scenario, the IBSM 114 can use a movement-type input mechanism to sense the orientation in which the user is holding the computing device. The determined orientation may fit a profile indicating that the user is looking at one display portion rather than the other. That is, a user may orient the computing device to optimize interaction with only one display portion. In response, the IBSM 114 can place the presumably unused display portion in a low-power state. Alternatively or additionally, the IBSM 114 can rely on other input mechanisms, such as the image sensing input mechanism 112, to determine which display portion the user is viewing. Alternatively or additionally, in some cases, the IBSM 114 can be configured to ignore input events received via the unused display portion, thereby classifying them as unintentional contact.
In general, the IBSM 114 can implement the above scenarios in different ways. In one case, the IBSM 114 can maintain a data store that provides various profiles of indicative input characteristics describing known input actions, both intentional and unintentional. The IBSM 114 can then compare the input events associated with an unknown input action against the pre-stored profiles in the data store to help identify the unknown input action. The data store may also include information indicating the behavior to be executed once a given input action has been identified.
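The profile-matching idea can be sketched as a nearest-profile lookup, as below. The feature vectors, distance measure, stored entries, and behavior names are assumptions made for illustration only.

```python
import math

PROFILE_STORE = [
    # (profile name, representative feature vector, behavior to apply once recognized)
    ("flip_gesture",   [0.9, 0.2, 0.7], "turn_page"),
    ("fast_scroll",    [0.4, 0.8, 0.3], "scroll_content"),
    ("accidental_tap", [0.1, 0.1, 0.9], "ignore_input"),
]

def classify(features, max_distance=0.5):
    """Return the behavior of the nearest stored profile, or None if nothing is close enough."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(PROFILE_STORE, key=lambda entry: dist(features, entry[1]))
    return best[2] if dist(features, best[1]) <= max_distance else None
```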
Alternatively or additionally, the IBSM 114 can apply any type of algorithmic technique to combine contact input events and movement input events. For example, the IBSM 114 can apply a formula that estimates the likely location of a touch contact on the display surface based on the movement input events alone; the IBSM 114 can then apply another formula that indicates how this movement-based position estimate is used to correct the position assessed from the contact input events. Such algorithms are device-specific in nature and may be developed based on theoretical analysis and/or experimental analysis.
B. Illustrative Processes
Figs. 19 and 20 show processes (1900, 2000) that explain the operation of the computing device 100 of Fig. 1 in the foreground and background modes of operation of Section A, respectively. Since the underlying principles of operation of the computing device have already been introduced in Section A, certain operations are described in this section in a general manner.
Fig. 19 shows a process 1900 for controlling the computing device 100 in response to intentional movement of the computing device 100. At block 1902, the IBSM 114 receives first input events from one or more contact-type input mechanisms. At block 1904, the IBSM 114 receives second input events from one or more movement-type input mechanisms. The second input events indicate that the computing device has been moved in an intentional manner. The right margin of Fig. 19 enumerates some of the intentional movements that may occur. More generally, blocks 1902 and 1904 may be performed in any order, and/or these operations may overlap. At block 1906, the IBSM 114 interprets, if possible, the intentional gesture made by the user based on the first input events and the second input events. At block 1908, the IBSM 114 applies the behavior associated with the intentional gesture that has been detected. In this sense, the interpretation performed at block 1906 enables the behavior to be applied at block 1908, where the operations of blocks 1906 and 1908 may be performed by the same component or by different components. The right margin of Fig. 19 enumerates some of the behaviors that may be invoked.
The feedback loop shown in Fig. 19 indicates that the IBSM 114 repeatedly analyzes the input events over the course of an input action. During the course of the input action, the IBSM 114 can increase the confidence with which it assesses the nature of the input action. Initially, the IBSM 114 may be unable to judge the nature of the input action. As the action continues, the IBSM 114 may predict the action with relatively low confidence, and may in fact misinterpret the input action. As the input action continues further, the IBSM 114 increases its confidence level and, if possible, ultimately determines correctly the type of input action that has occurred. Alternatively, the IBSM 114 can discard interpretations that are no longer supported by the received input events. In the manner described above, the IBSM 114 can undo the effects of any incorrect assumptions by restoring the display surface and/or restoring any applications to a pre-action state.
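The gradually increasing confidence of this feedback loop can be pictured as repeated re-scoring of candidate interpretations. In the hypothetical sketch below, the per-event support function, the pruning cutoff, and the commit threshold are all invented for illustration.

```python
def refine(candidates, event_stream, commit_threshold=0.9):
    """candidates maps interpretation name -> current score; each event adjusts the scores."""
    committed = None
    for event in event_stream:
        for name in list(candidates):
            candidates[name] *= event.support(name)   # assumed per-event likelihood method
            if candidates[name] < 0.05:
                del candidates[name]                  # discard unsupported interpretations
        total = sum(candidates.values()) or 1.0
        for name in candidates:
            candidates[name] /= total                 # renormalize remaining scores
        best = max(candidates, key=candidates.get, default=None)
        if best and candidates[best] >= commit_threshold:
            committed = best                          # confident enough to act on it
    return committed
```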
Fig. 20 illustrates a process 2000 for controlling the computing device 100 in response to unintentional movement of the computing device 100. At block 2002, the IBSM 114 receives first input events from one or more contact-type input mechanisms. At block 2004, the IBSM 114 receives second input events from one or more movement-type input mechanisms in response to movement of the computing device 100. More generally, blocks 2002 and 2004 may be performed in any order, and/or these operations may overlap. The movement in this case represents background movement of the computing device 100, meaning that the movement is not the active focus of the user's attention. The right margin of Fig. 20 enumerates some of the background movements that may occur. At block 2006, the IBSM 114 determines the type of input action that has occurred based on the first input events and the second input events, optionally in combination with other factors. In effect, the second input events modify or qualify the interpretation of the first input events. At block 2008, the IBSM 114 applies the appropriate behavior in response to the type of input action detected at block 2006. The operations of blocks 2006 and 2008 may be performed by the same component or by different components. The right margin of Fig. 20 enumerates some of the behaviors that the IBSM 114 can perform.
In a manner similar to Fig. 19, the feedback loop shown in Fig. 20 indicates that the IBSM 114 can repeat its analysis over the course of the input action, gradually improving the confidence of its conclusions.
C. Representative Processing Functionality
Fig. 21 sets forth illustrative electrical data processing functionality 2100 that can be used to implement any aspect of the functions described above. For example, the type of processing functionality 2100 illustrated in Fig. 21 may be used to implement any aspect of the computing device 100 of Fig. 1. In one case, the processing functionality 2100 may correspond to any type of computing device that includes one or more processing devices. In any case, the electrical data processing functionality 2100 represents one or more physical and tangible processing mechanisms.
The processing functionality 2100 can include volatile and non-volatile memory, such as RAM 2102 and ROM 2104, as well as one or more processing devices 2106. The processing functionality 2100 also optionally includes various media devices 2108, such as a hard disk module, an optical disk module, and so forth. The processing functionality 2100 can perform the various operations identified above when the processing device(s) 2106 execute instructions maintained by memory (e.g., RAM 2102, ROM 2104, or elsewhere).
More generally, instructions and other information can be stored on any computer-readable medium 2110, including but not limited to static memory storage devices, magnetic storage devices, optical storage devices, and the like. The term computer readable medium also encompasses a plurality of storage devices. In any case, the computer-readable media 2110 represent some form of physical and tangible entity.
The processing functionality 2100 also includes an input/output module 2112 for receiving various inputs from a user (via input module 2114) and for providing various outputs to the user (via output mechanisms). One particular output mechanism may include a display module 2116 and associated Graphical User Interface (GUI) 2118. The processing functionality 2100 can also include one or more network interfaces 2120 for exchanging data with other devices via one or more communication conduits 2122. One or more communication buses 2124 communicatively couple the above-described components together.
The communication conduits 2122 can be implemented in any manner, e.g., via a local area network, a wide area network (e.g., the Internet), etc., or any combination thereof. The communication conduits 2122 can include any combination of hardwired links, wireless links, routers, gateway functionality, name servers, and so forth, governed by any protocol or combination of protocols.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (11)
1. A computing device, comprising:
a plurality of input mechanisms comprising:
at least one contact-type input mechanism for providing at least one contact input event indicative of contact with the computing device;
at least one movement-type input mechanism for providing at least one movement input event indicative of one or more of orientation and motion of the computing device; and
at least one image sensing input mechanism to enhance performance of the at least one mobile input mechanism; and
an Interpretation and Behavior Selection Module (IBSM) for receiving the at least one contact input event and the at least one movement input event, the IBSM configured to use the at least one contact input event and the at least one movement input event to detect gestures involving intentional movement of the computing device in a prescribed manner,
the IBSM enabling performance of a behavior associated with the gesture that results in a visual manipulation of content rendered by the computing device, the IBSM relying on the at least one image sensing input mechanism to determine the content rendered by the computing device that is being viewed by a user.
2. The computing device of claim 1, wherein the at least one contact event is generated in response to contact with the computing device with a plurality of fingers.
3. The computing device of claim 1, wherein the intentional movement of the computing device in the prescribed manner involves a rotation of the computing device.
4. The computing device of claim 3, wherein at least one of an angle and a rate of the rotation defines a manner in which the behavior is performed.
5. The computing device of claim 3, wherein the rotation comprises a flipping action relative to at least one axis, and wherein the performed action comprises moving content on a display surface of the computing device to reveal underlying content.
6. The computing device of claim 3, wherein the rotation comprises a flipping action relative to at least one axis, and wherein the performed action comprises rotating content on a display surface of the computing device.
7. The computing device of claim 3, wherein the rotation comprises a flip action relative to at least one axis, and wherein the content is at least one page, and wherein the performed action comprises flipping the at least one page.
8. The computing device of claim 7, wherein the at least one contact input event indicates placement of a finger at a bookmark location, and wherein the flipping of the at least one page is performed relative to the bookmark location.
9. A method for controlling a computing device in response to intentional movement of the computing device, comprising:
receiving at least one contact input event from at least one touch input mechanism in response to establishing a point of contact on a surface of the computing device in a prescribed manner;
receiving at least one movement input event from at least one movement-type input mechanism in response to movement of the computing device in a prescribed manner;
detecting a gesture associated with the at least one contact input event and the intended movement, the gesture instructing the computing device to enable or disable rotation of content presented on the computing device during rotation of the computing device; and
enhancing an accuracy of at least one movement input event such that content presented on the computing device that a user is viewing is determined.
10. A method for an Interpretation and Behavior Selection Module (IBSM), comprising:
receiving at least one contact input event from one or more touch input mechanisms in response to contact with content presented on a display surface of a computing device in a prescribed manner;
receiving the at least one movement input event from one or more movement-type input mechanisms in response to intentional rotation of the computing device about at least one axis;
detecting a gesture associated with contact with the display surface and the intentional rotation;
enhancing an accuracy of at least one movement input event such that content presented on a display surface of the computing device that a user is viewing is determined; and
performing an action associated with the gesture that modifies a zoom level of the content.
11. A system for an Interpretation and Behavior Selection Module (IBSM), comprising:
means for receiving at least one contact input event from one or more touch input mechanisms in response to contact with content presented on a display surface of a computing device in a prescribed manner;
means for receiving the at least one movement input event from one or more movement-type input mechanisms in response to intentional rotation of the computing device about at least one axis;
means for detecting a gesture associated with contact with the display surface and the intentional rotation;
means for enhancing accuracy of at least one movement input event such that content presented on a display surface of the computing device that a user is viewing is determined; and
means for performing a behavior associated with the gesture that modifies a zoom level of the content.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/970,939 | 2010-12-17 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1186799A (en) | 2014-03-21 |
| HK1186799B (en) | 2018-03-02 |