US6346894B1 - Method and system for intelligent text entry on a numeric keypad - Google Patents
Method and system for intelligent text entry on a numeric keypad
- Publication number
- US6346894B1 (application US09/414,303)
- Authority
- US
- United States
- Prior art keywords
- key
- characters
- character
- gram
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
Definitions
- the present invention relates generally to entering characters on a numeric keypad such as a telephone touch-tone keypad.
- a single key corresponds to multiple characters.
- the “2” key corresponds to the letters “A”, “B”, and “C”.
- the iteration through characters starts with the first character of the group. For example, if the “2” key is pressed, iteration starts at “A”, and if the “3” key is pressed, iteration starts at “D”. Accordingly, for at least two-thirds of the characters entered on the keypad, multiple key presses are required. It is well known that using this technique to enter entire words and sentences is a tedious, error-prone, and generally unpleasant experience for the user, resulting in very limited deployment of applications requiring text entry on devices using numeric keypads (e.g., telephones).
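For concreteness, the conventional multi-tap behavior described above can be sketched as follows. The key-to-letter mapping is the common assignment and is an assumption for illustration; actual keypads vary (e.g., in the placement of "Q" and "Z").

```python
# Multi-tap sketch: repeated presses of the same key cycle through its
# letters. The mapping below is an assumed standard assignment.
MULTITAP = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def multitap_letter(key: str, presses: int) -> str:
    """Return the letter produced by pressing `key` `presses` times."""
    letters = MULTITAP[key]
    return letters[(presses - 1) % len(letters)]

# Entering "CAB" this way requires 3 + 1 + 2 = 6 presses of the "2" key,
# which illustrates why multi-tap entry is tedious.
```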
- FIG. 1 is an illustration of a prior art telephone that may be used with a preferred embodiment of the present invention.
- FIG. 2 is an illustration of a prior art numeric keypad that may be used with a preferred embodiment of the present invention.
- FIG. 3 is a flow chart of a method used in a preferred embodiment.
- FIG. 4 is a flow chart of a method used to predict a character in a preferred embodiment.
- FIG. 5 is an illustration of a decision tree used in a preferred embodiment.
- FIG. 6 is a flow chart of a method for calculating a probability of a context n-gram of a preferred embodiment.
- FIG. 7 is a flow chart of how a decision tree of a preferred embodiment is constructed.
- FIG. 8 is an illustration of a decision tree of a preferred embodiment in which a context n-gram is stored in reverse order.
- FIG. 9 is a flow chart of a method of a preferred embodiment in which probabilities of a plurality of context n-grams are compared.
- FIG. 10 is a flow chart of an encoding method of a preferred embodiment.
- FIG. 11 is a flow chart of a method of a preferred embodiment in which a predicted character is rejected by using a cycle key.
- FIG. 12 is a flow chart of method of another preferred embodiment in which a predicted character is rejected by re-selecting a key.
- FIG. 13 is a flow chart of an initialization method of a preferred embodiment.
- FIG. 14 is a flow chart of a method of a preferred embodiment for generating a plurality of probabilities in a table.
- FIG. 15 is an illustration of two components in a system of a preferred embodiment.
- a standard telephone 100 (FIG. 1) having a keypad like keypad 200 of FIG. 2 is merely one example.
- the embodiments described below each implement the general method illustrated in FIG. 3 .
- a user selects a key on a keypad (Step 310 ).
- an application makes a context-sensitive prediction as to which character of those corresponding to the selected key is intended by the user (Step 330 ).
- There are several ways to make a context-sensitive prediction. One way is to generate several context n-grams, one context n-gram for each character associated with the selected key.
- Each context n-gram comprises a number of previously-confirmed characters and one of the characters associated with the selected key. The number of times that each context n-gram was encountered in a sample text is determined by using a statistical model, and the character belonging to the context n-gram that was encountered most in the sample text is designated as the predicted character.
- the predicted character is then presented to the user for confirmation (Step 350 ). If the user confirms the selection, the character is stored (Step 370 ). If the user rejects the character, the application presents a new character to the user until a character is finally confirmed (Step 390 ). The confirmed character is then stored (Step 395 ). Accordingly, it is only when the predicted character is not the intended character that the user is required to iterate through character choices.
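The loop of Steps 310 through 395 can be sketched as below. `predict` and `is_confirmed` are placeholders for the statistical model and the user's confirmation; the character-key mapping and all names are illustrative assumptions, not taken from the patent.

```python
# Sketch of the general method of FIG. 3. KEYPAD is an assumed standard
# assignment of letters to digit keys, with space on "0".
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ", "0": " ",
}

def enter_character(key, predict, is_confirmed, confirmed_text=""):
    candidates = list(KEYPAD[key])
    first = predict(key, confirmed_text)               # Step 330
    order = [first] + [c for c in candidates if c != first]
    for ch in order:                                   # Steps 350/390
        if is_confirmed(ch):
            return ch                                  # Steps 370/395
    return order[-1]
```

When the prediction is right, the very first candidate is confirmed and a single key press suffices.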
- This general method has several advantages.
- One advantage is that a character may be entered using fewer keystrokes than with the prior art methods described above. As long as the predicted character is correct, the user may type words or sentences by hitting a single key per character as he would on a standard typewriter keyboard.
- Another advantage is that less concentration is required to enter a string of characters.
- An additional advantage is that the ability to easily enter arbitrary text enables a much larger class of applications, such as using a standard telephone to create custom messages on alpha-numeric pagers.
- the first step in the general method of the preferred embodiments is selecting a key.
- the user depresses a desired key on a numeric keypad.
- the numeric keypad is typically part of a standard telephone or screen phone. Other ways of selecting a key can be used.
- the selected key can correspond to a set of characters.
- the “5” key corresponds to the characters “J”, “K”, and “L”.
- Step 330 Predicting the Intended Character
- FIG. 4 provides a more detailed block diagram for the prediction of step 330 .
- a context n-gram is created in step 410 for each character associated with the key selected by the user.
- a context n-gram is a sequence of characters, where “n” represents the number of characters in the sequence (e.g., a 4-gram refers to a sequence of four characters).
- the last character in the sequence is one of the characters associated with the selected key.
- the first “n−1” characters in the sequence correspond to the last “n−1” previously-confirmed characters.
- the keypad 200 of FIG. 2 will be used as an example.
- the 26 letters of the English alphabet are associated with numbers 2-9, and the space is associated with the “0” key. Accordingly, there are 27 characters from which a user may choose. It should be remembered that this is merely one possible character-key association and that other associations may be used with more or fewer characters.
- the user entered a character for the middle of a word.
- a character for the beginning of a word several alternatives are possible in addition to the approach described above.
- One alternative is to fill the first n−1 positions in the context n-gram with the “space” character.
- Another alternative is to use a smaller-sized context n-gram.
- the context n-grams are used to predict which character associated with the selected key is intended by the user.
- the number of times that each context n-gram was encountered in a sample text is determined in step 430 .
- These probabilities are compared in step 450 and the character associated with the context n-gram having the highest probability is designated as the predicted character. That is, if a particular context n-gram was found more often than others in a sample text, it is likely that the particular context n-gram is meant to be entered by the user. Accordingly, the character associated with the context n-gram having the highest probability will be returned as the predicted character.
- the frequency of the particular context n-grams occurring in a sample text can be measured by using a statistical model such as a probability table or a decision tree. Other ways of predicting the intended character can also be used such as by using a probability model not based on a sample text.
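A minimal frequency-based version of this prediction step, using raw n-gram counts from a sample text, can be sketched as below. This is a simplification for illustration, not the patent's exact statistical model, and the names are assumptions.

```python
from collections import Counter

# Assumed standard character-key assignment, with space on "0".
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ", "0": " ",
}

def train_counts(sample_text, n=4):
    """Count every n-character sequence in the sample text."""
    text = sample_text.upper()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def predict(key, confirmed, counts, n=4, keypad=KEYPAD):
    """Steps 430/450: pick the candidate whose context n-gram was
    encountered most often in the sample text."""
    prefix = confirmed[-(n - 1):].rjust(n - 1, " ")
    return max(keypad[key], key=lambda c: counts[prefix + c])
```

Trained on text containing "HELLO" more often than any word ending "HELM" or "HELN", pressing "6" after "HELL" predicts "O".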
- the context n-grams are used to index a probability table containing 27^N entries (assuming 27 possible characters) of the probabilities of each context n-gram in a sample text (see FIG. 9).
- the probabilities in this table are generated from the sample text, as described below.
- this alternative approach may reduce storage requirements in instances where the number of valid (probability>0) context n-grams is small relative to the 27^N possible context n-grams (i.e., when N is large). If this is not the case, the added overhead may instead increase storage requirements.
- This storing alternative has the advantage of reducing runtime computation and storage requirements. Storage costs can be further reduced by indicating the predicted character (one character of those associated with a key) by storing a two-bit binary number.
- For the case in which 4 letters of a 26-letter alphabet are associated with a key, instead of storing the binary number indicating which of the 26 letters is predicted, it would only be necessary to store the two-bit binary number indicating which of the 4 candidate characters is predicted.
- the predictions are pre-compiled in the table instead of being determined when a key is selected.
- in this approach, rather than storing a 27^N-sized table of probabilities for size-N context n-grams and computing the maximums and corresponding predictions at runtime, these computations are performed offline, and the resulting predictions are stored in an 8*27^(N−1)-sized table.
- This table may contain one entry for each digit key 2-9 and 0 (those associated with characters in the keypad 200 of FIG. 2) and each possible set of prior characters.
- a context n-gram in which four characters are to be used to determine which character is intended by the user (i.e., a four-character context n-gram, or a 4-gram).
- the user has already entered the characters “HELL” and now presses the “6” key.
- “M”, “N”, and “O”
- the probability for each context n-gram is stored in a table, and the last letter of the context n-gram with the highest probability is selected.
- a context n-gram/key combination is used.
- a context n-gram/key combination is a context n-gram having its last position filled with a selected key instead of a character associated with the selected key.
- the context n-gram/key combination is “ELL6”. Only one entry is stored in the table. This entry corresponds to the predicted character. In the example above, the table entry for the “ELL6” context n-gram/key combination would simply contain the letter “O”, which is the predicted character itself. This embodiment has the advantage of reducing runtime computation and storage requirements.
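The pre-compiled table keyed by context n-gram/key combinations can be built offline from the n-gram counts, as in this sketch. The function names and the count-based selection are illustrative assumptions.

```python
from collections import Counter

# Assumed standard character-key assignment, with space on "0".
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ", "0": " ",
}

def build_key_table(counts, keypad=KEYPAD):
    """Map each prefix+key combination (e.g. 'ELL6') directly to its
    predicted character (e.g. 'O'), so runtime lookup is a single probe."""
    key_of = {c: k for k, chars in keypad.items() for c in chars}
    best = {}   # combo -> (count, predicted character)
    for gram, cnt in counts.items():
        last = gram[-1]
        if last not in key_of:
            continue
        combo = gram[:-1] + key_of[last]
        if cnt > best.get(combo, (0, ""))[0]:
            best[combo] = (cnt, last)
    return {combo: ch for combo, (cnt, ch) in best.items()}
```

With "ELLO" far more frequent than "ELLM" or "ELLN" in the training counts, the table entry for "ELL6" simply contains "O".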
- This embodiment may also use the storage alternative described above to store only valid (probability>0) context n-grams.
- a two-bit binary number may again be used to indicate which of the candidate characters is predicted.
- the key can also be represented by a reduced encoding in which each previously-confirmed character is represented by a 5-bit number and the key pressed is represented by a 3-bit number. That is, instead of representing each position in the context n-gram by a 5-bit number, only those positions with previously-confirmed characters are represented by a 5-bit number. The last position, that of the selected key, need only be represented by a 3-bit number. As described above, a 2-bit number indicates which of the 4 candidate characters is predicted. Using these reduced encodings, each entry for context n-grams of size N is represented by (N−1)*5+3+2 bits, or a total of N*5 bits. This same concept applies when more or fewer characters are used.
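The bit arithmetic works out as sketched below. The specific packing layout and character codes are one plausible realization of the encoding for illustration, not necessarily the patent's exact layout.

```python
# 5-bit codes for 27 characters: space = 0, A = 1, ..., Z = 26
# (an illustrative assignment).
CHAR_CODE = {" ": 0,
             **{c: i + 1 for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}}

def entry_bits(n):
    """(n-1) prefix characters x 5 bits, plus 3 bits for the key and
    2 bits for the candidate index -- a total of n*5 bits per entry."""
    return (n - 1) * 5 + 3 + 2

def pack_entry(prefix, key_digit, candidate_index):
    """Pack one table entry, e.g. prefix 'ELL', key '6', candidate 2."""
    value = 0
    for ch in prefix:
        value = (value << 5) | CHAR_CODE[ch]
    value = (value << 3) | (int(key_digit) - 2)   # letter keys 2-9 -> 0-7
    return (value << 2) | candidate_index
```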
- FIG. 5 shows a portion of such a tree.
- Context n-grams are processed one character at a time from left to right. At each node, starting at the root node N1, the link corresponding to the next character is followed to a subsequent node.
- the context n-gram “THE” corresponds to the path node N1-node N21-node N69-node N371.
- the counts at each node refer to the number of times that the node was traversed when the tree was initialized, as described in more detail below.
- the count for “THE” is 23129.
- the count for “TH” is 52310, and the count for “T” is 93410.
- the probability of a particular context n-gram may be computed by dividing the count of a node by the count stored at the root of the tree.
- FIG. 6 illustrates one simple algorithm that may be used to calculate the probability for a particular context n-gram.
- the tree is constructed by adding context n-grams one at a time to a tree which initially consists of only the root node.
- FIG. 7 shows one simple algorithm that may be used to add context n-grams to the tree. As each context n-gram is added, counts are incremented at each node encountered, and new nodes are created when there is no existing path. The count of each new node is initialized to 1.
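A minimal sketch of the tree of FIG. 5 together with the construction (FIG. 7) and probability (FIG. 6) algorithms. Class and function names are illustrative assumptions.

```python
class Node:
    """One tree node: a traversal count plus per-character child links."""
    def __init__(self):
        self.count = 0
        self.children = {}

def add_ngram(root, gram):
    """FIG. 7: follow the gram left to right, incrementing the count at
    each node encountered and creating nodes (initialized by their first
    traversal to count 1) where no path exists."""
    root.count += 1
    node = root
    for ch in gram:
        node = node.children.setdefault(ch, Node())
        node.count += 1

def probability(root, gram):
    """FIG. 6: the gram's probability is its node count over the root count."""
    node = root
    for ch in gram:
        node = node.children.get(ch)
        if node is None:
            return 0.0
    return node.count / root.count
```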
- the decision-tree embodiment can lead to better prediction. Since a finite sample size is required to estimate context n-gram probabilities, it is possible that valid context n-grams encountered when the system is being used were not encountered in the training text. This results in non-zero probabilities being stored as zero probabilities in the context n-gram table. Furthermore, when non-zero counts are very small, statistical reliability is sacrificed. When such suspect values are used for comparison, prediction becomes arbitrary. This overfitting problem becomes more severe as context n-gram size is increased.
- smaller sized context n-grams may be used for prediction when the above-described reverse-order tree representation is used. This is accomplished by traversing the tree until counts fall below some threshold or fail to satisfy some statistical-significance test. For example, when processing the context n-gram “AQU” in FIG. 8, traversal would stop at node N35 since nodes beyond this point contain counts too small to be reliable. In this case, the probability for the context n-gram “QU” would be used to perform prediction, rather than the entire context n-gram “AQU” which could lead to poorer performance.
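This back-off traversal over a reverse-order tree can be sketched as follows; the threshold value and all names are illustrative assumptions.

```python
class RNode:
    """Reverse-order tree node: traversal count plus child links."""
    def __init__(self):
        self.count = 0
        self.children = {}

def add_reversed(root, gram):
    """Store the context n-gram in reverse order (FIG. 8), so the most
    recent character sits nearest the root."""
    root.count += 1
    node = root
    for ch in reversed(gram):
        node = node.children.setdefault(ch, RNode())
        node.count += 1

def reliable_probability(root, gram, min_count=5):
    """Traverse from the most recent character backward, stopping when a
    count drops below `min_count`; the shorter context actually reached
    (e.g. 'QU' rather than all of 'AQU') is what gets used."""
    node, used = root, 0
    for ch in reversed(gram):
        child = node.children.get(ch)
        if child is None or child.count < min_count:
            break
        node, used = child, used + 1
    return node.count / root.count, gram[len(gram) - used:]
```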
- the probabilities are compared, and the character associated with the context n-gram with the highest probability is returned to the user.
- FIG. 9 illustrates how this may be done for the case where three characters are associated with the selected key and where a probability table is indexed with a context n-gram. It is important to note that this is only an example and that other ways of generating context n-gram probabilities can be used. If the probability of the first context n-gram is greater than the probability of each of the two other context n-grams (block 930 ), the character associated with the first context n-gram (block 940 ) is returned to the user (block 945 ). If the probability of the second context n-gram is greater than the probability of the other two context n-grams (block 935 ), the character associated with the second context n-gram (block 950 ) is returned to the user (block 945 ). Otherwise, the character associated with the third context n-gram (block 955 ) is returned to the user (block 945 ).
- the character is presented to the user. This typically is done via a voice prompt, but other ways of presenting the predicted character to the user may be used. For example, the predicted character can appear on the screen of a screen phone with or without speech prompts.
- a “cycle key” is a key designated on the keypad as the key that rejects the presented character and allows the presentation of a new character.
- the cycle key can be any key (e.g., the “#” key in block 1110 of FIG. 11) and typically does not correspond to any set of characters.
- FIG. 11 illustrates an example of this embodiment.
- a new character is chosen in block 1120 and presented in block 1130 to the user for confirmation.
- the new character is chosen from the set of characters corresponding to the originally selected key but is different from the one previously presented to the user.
- the new character may be chosen in any way.
- the new character may be the next most probable character, or it may simply be the next sequential character in the set (e.g., A→B→C for the “2” key in the keypad 200 of FIG. 2).
- the new character can be rejected by selecting the cycle key again, and another new character is then chosen and presented to the user. This choosing-and-presenting a new character continues until the user confirms the character.
- the user confirms a character in this embodiment by selecting any key which has a set of characters associated with it (e.g., not the cycle key).
- a character from the set of characters associated with the newly-selected key is predicted at block 1140 and presented in block 1130 to the user for confirmation as before.
- This embodiment has the advantage of using effectively only one keystroke to enter and confirm a character, if the predicted character is intended by the user. In this way, the user can type characters on the keypad as he would on a typewriter (i.e., selecting one key per character).
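The cycle-key interaction of FIG. 11 can be simulated over a stream of key presses. This sketch treats "#" as the cycle key and abstracts the prediction step; confirming any still-pending character at the end of the stream is a simplification for illustration, and the names are assumptions.

```python
# Assumed standard character-key assignment.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ", "0": " ",
}

def process_presses(presses, predict, keypad=KEYPAD, cycle_key="#"):
    """Each letter key confirms any pending character and starts a new
    prediction; each cycle-key press rejects the pending character and
    presents the next candidate in turn."""
    text, candidates, idx = "", None, 0
    for key in presses:
        if key == cycle_key:
            idx = (idx + 1) % len(candidates)      # reject, present next
        else:
            if candidates is not None:
                text += candidates[idx]            # implicit confirmation
            first = predict(key, text)
            candidates = [first] + [c for c in keypad[key] if c != first]
            idx = 0
    if candidates is not None:
        text += candidates[idx]                    # flush pending character
    return text
```

With a predictor that always guesses the key's first letter, the press sequence "2", "#", "2" rejects "A", accepts "B", and leaves "A" pending for the final "2".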
- the user can reject the character by selecting the same key again. For example, if the “2” key were selected initially and the user wishes to reject the presented character, the user would select the “2” key again.
- a new character is chosen at block 1210 and presented at block 1220 to the user for confirmation.
- the new character is chosen from the set of characters corresponding to the selected key but is different from the one previously presented to the user.
- the new character may be chosen in any way.
- the new character may be the next most probable character, or it may simply be the next sequential character in the set (e.g., A→B→C for the “2” key in the keypad 200 of FIG. 2).
- This new character can be rejected by selecting the same key again, and another new character is then chosen and presented to the user. This choosing-and-presenting of a new character continues until the user confirms the character.
- the user confirms a character in this embodiment by selecting any different key that has a set of characters associated with it. For example, if the “5” key were originally selected and a predicted character were presented to the user, the character would be confirmed if the user selected another key, such as the “2” key.
- a character from a set of characters associated with the different key is predicted at block 1230 and presented at block 1220 to the user for confirmation as before.
- the user needs to select an additional key if he wants to confirm the presented character and select a new character which corresponds to the key of the just-confirmed character. This would happen, for example, when the user wishes to confirm the character “N” associated with the “6” key on the keypad 200 of FIG. 2 and wishes the next character to be the character “O,” a character also associated with the “6” key. If the user selects the “6” key after the character “N” is presented, the character “N” would be rejected.
- a confirm-character key is a key designated on the numeric keypad as the key that will, when pressed before a key is selected twice in a row, prevent the application from rejecting the presented character.
- the confirm-character key can be any key (e.g., the “#” key 1240 ) and typically does not correspond to any set of characters.
- This embodiment may be more convenient to the user since it allows him to reject a presented character without moving his finger from the selected key to a cycle key.
- When a character is confirmed by the user, it is stored (in a buffer, for example) for use in the next context n-gram that is created. That is, the confirmed character would be the second-to-last character in the next context n-gram (the last character in the next context n-gram being one of the characters associated with the newly-selected key).
- the character may also be sent to an application to which the user is entering characters.
- the term “character” includes, but is not limited to, a letter, a number, a blank space, or a symbol, such as a comma or a period. While the keypad 200 of FIG. 2 illustrates one example of character-key associations, others may be used. Also, while characters printed on the key itself may indicate which characters are associated with a particular key, this is not always the case. For example, on some telephone keypads, the characters “Q” and “Z” are not represented on the face of the key. The embodiments described above may still be used to predict these characters even though they are not displayed on the key itself.
- Numbers can be handled in the embodiments above by including numbers in the set of characters to be cycled through. For example, if the user desires to enter a “4”, he would press the “4” key. If the previous character entered were a letter, the system would present one of the letters associated with the “4” key (“G”, “H”, or “I” in the keypad 200 of FIG. 2) as before, and the user would be able to cycle through the set of characters including the character “4”. The character “4” may, for example, be presented after all of the letters are cycled through. For example, if the initial guess is “H”, the sequence of characters presented to the user (upon cycling) can be: “H”-“I”-“G”-“4”. If the previous character entered is a number, however, the initial prediction would also be a number, and cycling would present letters. In this case the sequence can be, for example: “4”-“G”-“H”-“I”.
- Punctuation can be handled by associating punctuation symbols with a key that is not already associated with letters, for example.
- the “1” key can be associated with the characters “.”, “,”, “?”, and “;”. The symbols chosen will often depend on the application. These symbols are cycled through in the same manner described above for other characters.
- a designated key (e.g., the “*” key)
- an IVR (Interactive Voice Response) system
- these functions can be handled by various keys (including menu soft keys) on the telephone.
- an interactive television application can be used in which a remote control is used to select a key and where the predicted character is echoed on the television screen.
- a key is selected by navigating a cursor on the television screen.
- Prediction performance is poorer at the beginning of a word when there are no previously-confirmed characters available.
- Predictions can be made more reliable by using part-of-speech evidence.
- part-of-speech evidence comprises conditioning the probability of a character appearing in the i-th position of a word on the part of speech (e.g., verb, noun, etc.) of that word.
- the part of speech of the word can be estimated by examining the previous words in the sentence.
- a simple approach is to apply standard part-of-speech tagging methods that use a priori probabilities of parts of speech for individual words and probabilities of sequences of parts of speech (e.g., probability of the sequence DETERMINER-NOUN-VERB) to identify the most probable part of speech of each word in a sequence. More elaborate linguistic analysis such as parsing can also be employed.
- the probability of a particular letter being the i-th letter of a word with a particular part-of-speech can be used to make a more reliable prediction of the letter.
- This is particularly powerful for predicting the first letter of closed-class words (a small, finite class of words known ahead of time) such as determiners (“A”, “THE”, “AN”, etc.) and prepositions (“ON”, “IN”, “OF”, etc.).
- Predictive performance can be improved in some cases by incorporating equivalence classes of characters into context n-grams.
- An equivalence class of characters is a set of characters that can be used interchangeably in a particular context.
- a single symbol denoting the class can substitute for any occurrence of a character within that class, for example, when compiling context n-gram probabilities.
- “vowel” can be the class containing the letters “A”, “E”, “I”, “O”, and “U”.
- a single context n-gram such as “vowel-S-T-R-vowel” can be used in place of several context n-grams of the form “A-S-T-R-A”, “A-S-T-R-E”, . . . , “E-S-T-R-A”, . . .
- By estimating probabilities of context n-grams containing these equivalence classes, longer context n-grams can be used without sacrificing statistical significance.
- Well-known classes such as “vowel” and “consonant” classes, classes derived from speech processing knowledge (e.g., voiced consonant), or classes formed automatically via processes such as clustering can be used. In the latter case, letters that appear frequently in the same context (i.e., same preceding and subsequent context n-grams) can be clustered in the sample text.
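Mapping characters to equivalence classes before counting can be sketched as below. The "@" class symbol and the use of the vowel class alone are illustrative assumptions.

```python
from collections import Counter

VOWELS = set("AEIOU")

def to_class_gram(gram):
    """Replace every vowel with the single class symbol '@', so that
    'ASTRA', 'ASTRE', 'ESTRA', ... all collapse to '@STR@'."""
    return "".join("@" if c in VOWELS else c for c in gram)

def class_counts(sample_text, n=5):
    """Compile context n-gram counts over class symbols instead of raw
    letters, pooling the statistics of every member of the class."""
    text = sample_text.upper()
    return Counter(to_class_gram(text[i:i + n])
                   for i in range(len(text) - n + 1))
```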
- Neural networks have properties which can also be helpful in improving performance.
- a neural network's ability to partition a high-dimensional space into meaningful classes can lead to implicit discovery of the equivalence classes and grammatical structure described above. This, in turn, can allow use of larger context n-grams, leading to improved prediction performance.
- the input vector is a concatenation of the vectors for the individual letters in the context n-gram.
- the output vector is simply the vector corresponding to the predicted letter.
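The input encoding described here, a concatenation of per-letter vectors, can be sketched with one-hot vectors over the 27-character alphabet. This shows the representation only, not a trained network, and the names are assumptions.

```python
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # 27 characters: space + A-Z

def one_hot(ch):
    """27-element vector with a single 1.0 at the character's position."""
    vec = [0.0] * len(ALPHABET)
    vec[ALPHABET.index(ch)] = 1.0
    return vec

def input_vector(context):
    """Concatenate the one-hot vectors of the context letters: a
    3-letter context yields a 3 x 27 = 81-element network input."""
    vec = []
    for ch in context:
        vec.extend(one_hot(ch))
    return vec
```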
- the English language for example, is itself a conglomeration of many different languages and influences. Many words originally derive from Latin, Greek, and French, among other languages. Each such language has its own unique statistical properties. If the language of a particular word is known, the appropriate context n-gram statistics can be used to attain better predictive performance. The language can in turn be predicted probabilistically from the letter combinations in words.
- prediction performance can be improved by using context n-grams in the early part of a word to predict language which, in turn, can be used to improve prediction of letters later in the word.
- an observed context n-gram of “CZ” is a clue that the word is of eastern European origin which, in turn, alters the probabilities of subsequent context n-grams (e.g., “YK”). This approach is, of course, particularly useful in improving performance on proper names.
- FIGS. 10, 13, and 14 show flow charts illustrating a simple algorithm that can be used to generate probabilities from a sample text.
- the system can be tuned to particular applications by using statistics derived from a sample text unique to the application. For example, if the system is used to enter proper names, it can be trained using a text source of names such as a machine-readable telephone directory.
- the system can be tuned to particular users or applications by updating statistics in real time as users use the system. In this case, as users type characters using the system, probabilities are updated, if the tree or table is available for updating.
- the method of the embodiments described above can be used in a system having a numeric keypad (block 1510 ) and means for predicting a character from a set of at least one character corresponding to a key selected on a numeric keypad (block 1530 ), as seen in FIG. 15 .
- the means for predicting a character can be a computer comprising computer readable program code means embodied therein to perform any of the above described methods (e.g., predicting a character, generating a context n-gram, predicting a character based on the context n-gram, storing a character, and using a context n-gram/key combination to determine the intended letter).
- the embodiments described above may be programmed using, e.g., Visual Basic, Conversant, or Java.
- the pre-compiled probability table embodiment described above may be considered the most desirable, if the real-time updating alternative is not being used. If memory availability is an issue (e.g., in small devices such as telephones), the alternative storage mechanism described above can be used (i.e., by using a binary search and by encoding context n-grams and predicted letters as minimal sized binary numbers).
- a simple context n-gram model has empirically performed well when one of the more powerful models described above (e.g., Hidden Markov Models or neural networks) was not used and when a small sample size was used. In such a case, using a context n-gram of size 4 provided the best trade-off between sufficient context and overfitting.
- a sample text such as a machine-readable English novel (e.g., Moby Dick) can provide sizable predictive accuracy.
- a larger sample size can provide better performance.
- a source of proper names such as a machine-readable telephone directory.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Input From Keyboards Or The Like (AREA)
Abstract
Description
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/414,303 US6346894B1 (en) | 1997-02-27 | 1999-10-06 | Method and system for intelligent text entry on a numeric keypad |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/806,724 US6005495A (en) | 1997-02-27 | 1997-02-27 | Method and system for intelligent text entry on a numeric keypad |
US09/414,303 US6346894B1 (en) | 1997-02-27 | 1999-10-06 | Method and system for intelligent text entry on a numeric keypad |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/806,724 Continuation US6005495A (en) | 1997-02-27 | 1997-02-27 | Method and system for intelligent text entry on a numeric keypad |
Publications (1)
Publication Number | Publication Date |
---|---|
US6346894B1 true US6346894B1 (en) | 2002-02-12 |
Family
ID=25194715
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/806,724 Expired - Lifetime US6005495A (en) | 1997-02-27 | 1997-02-27 | Method and system for intelligent text entry on a numeric keypad |
US09/414,303 Expired - Lifetime US6346894B1 (en) | 1997-02-27 | 1999-10-06 | Method and system for intelligent text entry on a numeric keypad |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/806,724 Expired - Lifetime US6005495A (en) | 1997-02-27 | 1997-02-27 | Method and system for intelligent text entry on a numeric keypad |
Country Status (1)
Country | Link |
---|---|
US (2) | US6005495A (en) |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030190181A1 (en) * | 2000-01-17 | 2003-10-09 | Kim Min Kyum | Apparatus and method for inputting alphabet characters on keypad |
WO2003085505A1 (en) * | 2002-04-04 | 2003-10-16 | Xrgomics Pte. Ltd | Reduced keyboard system that emulates qwerty-type mapping and typing |
US6686902B2 (en) * | 2000-09-27 | 2004-02-03 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting characters in a mobile terminal |
US20050043947A1 (en) * | 2001-09-05 | 2005-02-24 | Voice Signal Technologies, Inc. | Speech recognition using ambiguous or phone key spelling and/or filtering |
US20050049858A1 (en) * | 2003-08-25 | 2005-03-03 | Bellsouth Intellectual Property Corporation | Methods and systems for improving alphabetic speech recognition accuracy |
EP1528459A2 (en) * | 2003-10-27 | 2005-05-04 | Nikolaos Tselios | Method and apparatus of automatic text input in digital devices with a reduced number of keys |
US20050137868A1 (en) * | 2003-12-19 | 2005-06-23 | International Business Machines Corporation | Biasing a speech recognizer based on prompt context |
US20050140650A1 (en) * | 2000-08-31 | 2005-06-30 | Microsoft Corporation | J-key input for computer systems |
US20050159957A1 (en) * | 2001-09-05 | 2005-07-21 | Voice Signal Technologies, Inc. | Combined speech recognition and sound recording |
US20050159948A1 (en) * | 2001-09-05 | 2005-07-21 | Voice Signal Technologies, Inc. | Combined speech and handwriting recognition |
US20050200609A1 (en) * | 2004-03-12 | 2005-09-15 | Van Der Hoeven Steven | Apparatus method and system for a data entry interface |
US20050283358A1 (en) * | 2002-06-20 | 2005-12-22 | James Stephanick | Apparatus and method for providing visual indication of character ambiguity during text entry |
US20060012494A1 (en) * | 2004-07-13 | 2006-01-19 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting an alphabet character in a terminal with a keypad |
US20060019707A1 (en) * | 2004-07-20 | 2006-01-26 | Griffin Jason T | Handheld electronic device having facilitated telephone dialing with audible sound tags, and associated method |
US20060139315A1 (en) * | 2001-01-17 | 2006-06-29 | Kim Min-Kyum | Apparatus and method for inputting alphabet characters on keypad |
US20060236239A1 (en) * | 2003-06-18 | 2006-10-19 | Zi Corporation | Text entry system and method |
US7143043B1 (en) * | 2000-04-26 | 2006-11-28 | Openwave Systems Inc. | Constrained keyboard disambiguation using voice recognition |
US20060274051A1 (en) * | 2003-12-22 | 2006-12-07 | Tegic Communications, Inc. | Virtual Keyboard Systems with Automatic Correction |
US20070013650A1 (en) * | 2005-07-15 | 2007-01-18 | Research In Motion Limited | Systems and methods for inputting data using multi-character keys |
US20070028019A1 (en) * | 2005-07-27 | 2007-02-01 | Millind Mittal | Method and apparatus for efficient text entry in cell phones and other small keypad devices |
US20070237310A1 (en) * | 2006-03-30 | 2007-10-11 | Schmiedlin Joshua L | Alphanumeric data entry apparatus and method using multicharacter keys of a keypad |
US20080002885A1 (en) * | 2006-06-30 | 2008-01-03 | Vadim Fux | Method of learning a context of a segment of text, and associated handheld electronic device |
US20080001788A1 (en) * | 2006-06-30 | 2008-01-03 | Samsung Electronics Co., Ltd. | Character input method and mobile communication terminal using the same |
US20080015841A1 (en) * | 2000-05-26 | 2008-01-17 | Longe Michael R | Directional Input System with Automatic Correction |
US7376938B1 (en) | 2004-03-12 | 2008-05-20 | Steven Van der Hoeven | Method and system for disambiguation and predictive resolution |
US20080141125A1 (en) * | 2006-06-23 | 2008-06-12 | Firooz Ghassabian | Combined data entry systems |
US20080154576A1 (en) * | 2006-12-21 | 2008-06-26 | Jianchao Wu | Processing of reduced-set user input text with selected one of multiple vocabularies and resolution modalities |
US20080162113A1 (en) * | 2006-12-28 | 2008-07-03 | Dargan John P | Method and Apparatus for Predicting Text |
US20080195388A1 (en) * | 2007-02-08 | 2008-08-14 | Microsoft Corporation | Context based word prediction |
US20080195571A1 (en) * | 2007-02-08 | 2008-08-14 | Microsoft Corporation | Predicting textual candidates |
US7444286B2 (en) | 2001-09-05 | 2008-10-28 | Roth Daniel L | Speech recognition using re-utterance recognition |
US20090037623A1 (en) * | 1999-10-27 | 2009-02-05 | Firooz Ghassabian | Integrated keypad system |
US20090146848A1 (en) * | 2004-06-04 | 2009-06-11 | Ghassabian Firooz Benjamin | Systems to enhance data entry in mobile and fixed environment |
US20090174580A1 (en) * | 2006-01-13 | 2009-07-09 | Vadim Fux | Handheld Electronic Device and Method for Disambiguation of Text Input Providing Suppression of Low Probability Artificial Variants |
US20090199092A1 (en) * | 2005-06-16 | 2009-08-06 | Firooz Ghassabian | Data entry system |
US20090213134A1 (en) * | 2003-04-09 | 2009-08-27 | James Stephanick | Touch screen and graphical user interface |
US7809574B2 (en) | 2001-09-05 | 2010-10-05 | Voice Signal Technologies Inc. | Word recognition using choice lists |
US20100271299A1 (en) * | 2003-04-09 | 2010-10-28 | James Stephanick | Selective input system and process based on tracking of motion parameters of an input object |
US20100283638A1 (en) * | 2009-05-05 | 2010-11-11 | Burrell Iv James W | World's fastest multi-tap phone and control means |
US20100302163A1 (en) * | 2007-08-31 | 2010-12-02 | Benjamin Firooz Ghassabian | Data entry system |
US20100328112A1 (en) * | 2009-06-24 | 2010-12-30 | Htc Corporation | Method of dynamically adjusting long-press delay time, electronic device, and computer-readable medium |
US20110010174A1 (en) * | 2004-06-02 | 2011-01-13 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US7880730B2 (en) | 1999-05-27 | 2011-02-01 | Tegic Communications, Inc. | Keyboard system with automatic correction |
CN101099131B (en) * | 2004-12-07 | 2011-06-29 | Zi Corporation of Canada, Inc. | Equipment and method for searching and finding |
US20110193797A1 (en) * | 2007-02-01 | 2011-08-11 | Erland Unruh | Spell-check for a keyboard system with automatic correction |
US20120041757A1 (en) * | 2004-06-02 | 2012-02-16 | Research In Motion Limited | Handheld electronic device with text disambiguation |
CN102439540A (en) * | 2009-03-19 | 2012-05-02 | Google Inc. | Input method editor |
US8201087B2 (en) | 2007-02-01 | 2012-06-12 | Tegic Communications, Inc. | Spell-check for a keyboard system with automatic correction |
US20120188168A1 (en) * | 2009-07-23 | 2012-07-26 | Ki-Sup Yoon | Device for inputting english characters for a mobile communication terminal, and method for same |
US20120253701A1 (en) * | 2007-05-22 | 2012-10-04 | Avaya Inc. | Monitoring key-press delay and duration to determine need for assistance |
US20120323561A1 (en) * | 2004-06-02 | 2012-12-20 | Research In Motion Limited | Handheld electronic device with text disambiguation |
US8381137B2 (en) | 1999-12-03 | 2013-02-19 | Tegic Communications, Inc. | Explicit character filtering of ambiguous text entry |
US8704761B2 (en) | 2009-03-19 | 2014-04-22 | Google Inc. | Input method editor |
US20140115491A1 (en) * | 2011-04-15 | 2014-04-24 | Doro AB | Portable electronic device having a user interface features which are adjustable based on user behaviour patterns |
US8938688B2 (en) | 1998-12-04 | 2015-01-20 | Nuance Communications, Inc. | Contextual prediction of user words and user actions |
US9286288B2 (en) | 2006-06-30 | 2016-03-15 | Blackberry Limited | Method of learning character segments during text input, and associated handheld electronic device |
US9639266B2 (en) | 2011-05-16 | 2017-05-02 | Touchtype Limited | User input prediction |
Families Citing this family (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5786776A (en) * | 1995-03-13 | 1998-07-28 | Kabushiki Kaisha Toshiba | Character input terminal device and recording apparatus |
CA2302595C (en) * | 1997-09-25 | 2002-09-17 | Tegic Communications, Inc. | Reduced keyboard disambiguating system |
AU9060498A (en) * | 1998-09-09 | 2000-03-27 | Qi Hao | Keyboard and thereof input method |
US6219731B1 (en) | 1998-12-10 | 2001-04-17 | Eatoni Ergonomics, Inc. | Method and apparatus for improved multi-tap text input |
US6885317B1 (en) | 1998-12-10 | 2005-04-26 | Eatoni Ergonomics, Inc. | Touch-typable devices based on ambiguous codes and methods to design such devices |
US6204848B1 (en) * | 1999-04-14 | 2001-03-20 | Motorola, Inc. | Data entry apparatus having a limited number of character keys and method |
US6621424B1 (en) * | 2000-02-18 | 2003-09-16 | Mitsubishi Electric Research Laboratories Inc. | Method for predicting keystroke characters on single pointer keyboards and apparatus therefore |
US6646572B1 (en) * | 2000-02-18 | 2003-11-11 | Mitsubishi Electric Research Laboratories, Inc. | Method for designing optimal single pointer predictive keyboards and apparatus therefore |
WO2001074133A2 (en) | 2000-03-31 | 2001-10-11 | Ventris, Inc. | Method and apparatus for input of alphanumeric text data from twelve key keyboards |
WO2001082044A2 (en) * | 2000-04-20 | 2001-11-01 | Glenayre Electronics, Inc. | Tree-based text entry in messaging devices having limited keyboard capability |
US7277732B2 (en) * | 2000-10-13 | 2007-10-02 | Microsoft Corporation | Language input system for mobile devices |
US7162694B2 (en) * | 2001-02-13 | 2007-01-09 | Microsoft Corporation | Method for entering text |
FI20010644L (en) * | 2001-03-28 | 2002-09-29 | Nokia Corp | Specifying the language of a character sequence |
GB2373907B (en) * | 2001-03-29 | 2005-04-06 | Nec Technologies | Predictive text algorithm |
US7103534B2 (en) * | 2001-03-31 | 2006-09-05 | Microsoft Corporation | Machine learning contextual approach to word determination for text input via reduced keypad keys |
US7117144B2 (en) * | 2001-03-31 | 2006-10-03 | Microsoft Corporation | Spell checking for text input via reduced keypad keys |
JP4084582B2 (en) * | 2001-04-27 | 2008-04-30 | 俊司 加藤 | Touch type key input device |
JP3722359B2 (en) * | 2001-06-29 | 2005-11-30 | Esmertec Engineering Services Co., Ltd. | Character input system and communication terminal |
US7761175B2 (en) | 2001-09-27 | 2010-07-20 | Eatoni Ergonomics, Inc. | Method and apparatus for discoverable input of symbols on a reduced keypad |
US20030197689A1 (en) * | 2002-04-23 | 2003-10-23 | May Gregory J. | Input device that allows multiple touch key input |
US8200865B2 (en) | 2003-09-11 | 2012-06-12 | Eatoni Ergonomics, Inc. | Efficient method and apparatus for text entry based on trigger sequences |
GB0406451D0 (en) | 2004-03-23 | 2004-04-28 | Patel Sanjay | Keyboards |
US7478081B2 (en) * | 2004-11-05 | 2009-01-13 | International Business Machines Corporation | Selection of a set of optimal n-grams for indexing string data in a DBMS system under space constraints introduced by the system |
US7599830B2 (en) * | 2005-03-16 | 2009-10-06 | Research In Motion Limited | Handheld electronic device with reduced keyboard and associated method of providing quick text entry in a message |
GB0505942D0 (en) * | 2005-03-23 | 2005-04-27 | Patel Sanjay | Human to mobile interfaces |
GB0505941D0 (en) * | 2005-03-23 | 2005-04-27 | Patel Sanjay | Human-to-mobile interfaces |
US7403188B2 (en) * | 2005-04-04 | 2008-07-22 | Research In Motion Limited | Handheld electronic device with text disambiguation employing advanced word frequency learning feature |
US9606634B2 (en) * | 2005-05-18 | 2017-03-28 | Nokia Technologies Oy | Device incorporating improved text input mechanism |
US8374846B2 (en) | 2005-05-18 | 2013-02-12 | Neuer Wall Treuhand Gmbh | Text input device and method |
US8036878B2 (en) * | 2005-05-18 | 2011-10-11 | Neuer Wall Treuhand GmbH | Device incorporating improved text input mechanism |
US8117540B2 (en) | 2005-05-18 | 2012-02-14 | Neuer Wall Treuhand Gmbh | Method and device incorporating improved text input mechanism |
US7551162B2 (en) * | 2005-07-05 | 2009-06-23 | Chang-Sung Yu | Method for keypad optimization |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7649478B1 (en) | 2005-11-03 | 2010-01-19 | Hyoungsoo Yoon | Data entry using sequential keystrokes |
US8065135B2 (en) * | 2006-04-06 | 2011-11-22 | Research In Motion Limited | Handheld electronic device and method for employing contextual data for disambiguation of text input |
US7477165B2 (en) | 2006-04-06 | 2009-01-13 | Research In Motion Limited | Handheld electronic device and method for learning contextual data during disambiguation of text input |
KR100765887B1 (en) * | 2006-05-19 | 2007-10-10 | 삼성전자주식회사 | Character input method of mobile terminal by extracting candidate character group |
US7683886B2 (en) * | 2006-09-05 | 2010-03-23 | Research In Motion Limited | Disambiguated text message review function |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
EP2069891A2 (en) * | 2006-09-14 | 2009-06-17 | Eatoni Ergonomics, Inc. | Keypads row similar to a telephone keypad |
GB2443652B (en) * | 2006-11-08 | 2009-06-17 | Samsung Electronics Co Ltd | Mobile communications |
US8035534B2 (en) * | 2006-11-10 | 2011-10-11 | Research In Motion Limited | Method for automatically preferring a diacritical version of a linguistic element on a handheld electronic device based on linguistic source and associated apparatus |
WO2009107111A1 (en) * | 2008-02-28 | 2009-09-03 | Nxp B.V. | Text entry using infrared remote control |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
DE602008005428D1 (en) * | 2008-06-11 | 2011-04-21 | Exb Asset Man Gmbh | Apparatus and method with improved text input mechanism |
KR100948124B1 (en) * | 2008-08-14 | 2010-03-18 | 강윤기 | Word input method |
US20100285435A1 (en) * | 2009-05-06 | 2010-11-11 | Gregory Keim | Method and apparatus for completion of keyboard entry |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9760559B2 (en) * | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4727357A (en) | 1984-06-08 | 1988-02-23 | Amtel Communications, Inc. | Compact keyboard system |
US4737980A (en) | 1985-07-19 | 1988-04-12 | Amtelco | Computer data entry method and apparatus |
US4866759A (en) | 1987-11-30 | 1989-09-12 | Riskin Bernard N | Packet network telecommunication system having access nodes with word guessing capability |
US5031206A (en) | 1987-11-30 | 1991-07-09 | Fon-Ex, Inc. | Method and apparatus for identifying words entered on DTMF pushbuttons |
US5062070A (en) | 1983-01-21 | 1991-10-29 | The Laitram Corporation | Comprehensive computer data and control entries from very few keys operable in a fast touch typing mode |
US5117455A (en) | 1990-03-28 | 1992-05-26 | Danish International, Inc. | Telephone keypad matrix |
US5184315A (en) | 1983-01-21 | 1993-02-02 | The Laitram Corporation | Comprehensive computer data and control entries from very few keys operable in a fast touch typing mode |
US5200988A (en) | 1991-03-11 | 1993-04-06 | Fon-Ex, Inc. | Method and means for telecommunications by deaf persons utilizing a small hand held communications device |
US5392338A (en) | 1990-03-28 | 1995-02-21 | Danish International, Inc. | Entry of alphabetical characters into a telephone system using a conventional telephone keypad |
US5911485A (en) * | 1995-12-11 | 1999-06-15 | Unwired Planet, Inc. | Predictive data entry method for a keypad |
US5952942A (en) * | 1996-11-21 | 1999-09-14 | Motorola, Inc. | Method and device for input of text messages from a keypad |
US5953541A (en) * | 1997-01-24 | 1999-09-14 | Tegic Communications, Inc. | Disambiguating system for disambiguating ambiguous input sequences by displaying objects associated with the generated input sequences in the order of decreasing frequency of use |
US6307548B1 (en) * | 1997-09-25 | 2001-10-23 | Tegic Communications, Inc. | Reduced keyboard disambiguating system |
- 1997
- 1997-02-27 US US08/806,724 patent/US6005495A/en not_active Expired - Lifetime
- 1999
- 1999-10-06 US US09/414,303 patent/US6346894B1/en not_active Expired - Lifetime
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5062070A (en) | 1983-01-21 | 1991-10-29 | The Laitram Corporation | Comprehensive computer data and control entries from very few keys operable in a fast touch typing mode |
US5184315A (en) | 1983-01-21 | 1993-02-02 | The Laitram Corporation | Comprehensive computer data and control entries from very few keys operable in a fast touch typing mode |
US4727357A (en) | 1984-06-08 | 1988-02-23 | Amtel Communications, Inc. | Compact keyboard system |
US4737980A (en) | 1985-07-19 | 1988-04-12 | Amtelco | Computer data entry method and apparatus |
US5031206A (en) | 1987-11-30 | 1991-07-09 | Fon-Ex, Inc. | Method and apparatus for identifying words entered on DTMF pushbuttons |
US4866759A (en) | 1987-11-30 | 1989-09-12 | Riskin Bernard N | Packet network telecommunication system having access nodes with word guessing capability |
US5117455A (en) | 1990-03-28 | 1992-05-26 | Danish International, Inc. | Telephone keypad matrix |
US5392338A (en) | 1990-03-28 | 1995-02-21 | Danish International, Inc. | Entry of alphabetical characters into a telephone system using a conventional telephone keypad |
US5200988A (en) | 1991-03-11 | 1993-04-06 | Fon-Ex, Inc. | Method and means for telecommunications by deaf persons utilizing a small hand held communications device |
US5911485A (en) * | 1995-12-11 | 1999-06-15 | Unwired Planet, Inc. | Predictive data entry method for a keypad |
US5952942A (en) * | 1996-11-21 | 1999-09-14 | Motorola, Inc. | Method and device for input of text messages from a keypad |
US5953541A (en) * | 1997-01-24 | 1999-09-14 | Tegic Communications, Inc. | Disambiguating system for disambiguating ambiguous input sequences by displaying objects associated with the generated input sequences in the order of decreasing frequency of use |
US6307548B1 (en) * | 1997-09-25 | 2001-10-23 | Tegic Communications, Inc. | Reduced keyboard disambiguating system |
Non-Patent Citations (1)
Title |
---|
"Probabilistic Character Disambiguation for Reduced Keyboards Using Text Samples", J. L. Arnott and M. Y. Javed, Augmentative and Alternative Communication, pp. 215-223, September 1992.
Cited By (126)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9626355B2 (en) | 1998-12-04 | 2017-04-18 | Nuance Communications, Inc. | Contextual prediction of user words and user actions |
US8938688B2 (en) | 1998-12-04 | 2015-01-20 | Nuance Communications, Inc. | Contextual prediction of user words and user actions |
US9557916B2 (en) | 1999-05-27 | 2017-01-31 | Nuance Communications, Inc. | Keyboard system with automatic correction |
US7880730B2 (en) | 1999-05-27 | 2011-02-01 | Tegic Communications, Inc. | Keyboard system with automatic correction |
US8294667B2 (en) | 1999-05-27 | 2012-10-23 | Tegic Communications, Inc. | Directional input system with automatic correction |
US20100277416A1 (en) * | 1999-05-27 | 2010-11-04 | Tegic Communications, Inc. | Directional input system with automatic correction |
US8441454B2 (en) | 1999-05-27 | 2013-05-14 | Tegic Communications, Inc. | Virtual keyboard system with automatic correction |
US9400782B2 (en) | 1999-05-27 | 2016-07-26 | Nuance Communications, Inc. | Virtual keyboard system with automatic correction |
US8466896B2 (en) | 1999-05-27 | 2013-06-18 | Tegic Communications, Inc. | System and apparatus for selectable input with a touch screen |
US20090284471A1 (en) * | 1999-05-27 | 2009-11-19 | Tegic Communications, Inc. | Virtual Keyboard System with Automatic Correction |
US8576167B2 (en) | 1999-05-27 | 2013-11-05 | Tegic Communications, Inc. | Directional input system with automatic correction |
US8498406B2 (en) * | 1999-10-27 | 2013-07-30 | Keyless Systems Ltd. | Integrated keypad system |
US20090037623A1 (en) * | 1999-10-27 | 2009-02-05 | Firooz Ghassabian | Integrated keypad system |
US8782568B2 (en) | 1999-12-03 | 2014-07-15 | Nuance Communications, Inc. | Explicit character filtering of ambiguous text entry |
US8381137B2 (en) | 1999-12-03 | 2013-02-19 | Tegic Communications, Inc. | Explicit character filtering of ambiguous text entry |
US8972905B2 (en) | 1999-12-03 | 2015-03-03 | Nuance Communications, Inc. | Explicit character filtering of ambiguous text entry |
US8990738B2 (en) | 1999-12-03 | 2015-03-24 | Nuance Communications, Inc. | Explicit character filtering of ambiguous text entry |
US20030190181A1 (en) * | 2000-01-17 | 2003-10-09 | Kim Min Kyum | Apparatus and method for inputting alphabet characters on keypad |
US7143043B1 (en) * | 2000-04-26 | 2006-11-28 | Openwave Systems Inc. | Constrained keyboard disambiguation using voice recognition |
US7778818B2 (en) * | 2000-05-26 | 2010-08-17 | Tegic Communications, Inc. | Directional input system with automatic correction |
US20080126073A1 (en) * | 2000-05-26 | 2008-05-29 | Longe Michael R | Directional Input System with Automatic Correction |
US8976115B2 (en) | 2000-05-26 | 2015-03-10 | Nuance Communications, Inc. | Directional input system with automatic correction |
US20080015841A1 (en) * | 2000-05-26 | 2008-01-17 | Longe Michael R | Directional Input System with Automatic Correction |
US6972748B1 (en) * | 2000-08-31 | 2005-12-06 | Microsoft Corporation | J-key input for computer systems |
US7589710B2 (en) | 2000-08-31 | 2009-09-15 | Microsoft Corporation | J-key input for computer systems |
US20050140650A1 (en) * | 2000-08-31 | 2005-06-30 | Microsoft Corporation | J-key input for computer systems |
US6686902B2 (en) * | 2000-09-27 | 2004-02-03 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting characters in a mobile terminal |
US20060139315A1 (en) * | 2001-01-17 | 2006-06-29 | Kim Min-Kyum | Apparatus and method for inputting alphabet characters on keypad |
US20070092326A1 (en) * | 2001-01-17 | 2007-04-26 | Kim Min-Kyum | Apparatus and method for inputting alphabet characters on keypad |
US7505911B2 (en) | 2001-09-05 | 2009-03-17 | Roth Daniel L | Combined speech recognition and sound recording |
US7444286B2 (en) | 2001-09-05 | 2008-10-28 | Roth Daniel L | Speech recognition using re-utterance recognition |
US7526431B2 (en) | 2001-09-05 | 2009-04-28 | Voice Signal Technologies, Inc. | Speech recognition using ambiguous or phone key spelling and/or filtering |
US20050043947A1 (en) * | 2001-09-05 | 2005-02-24 | Voice Signal Technologies, Inc. | Speech recognition using ambiguous or phone key spelling and/or filtering |
US20050159948A1 (en) * | 2001-09-05 | 2005-07-21 | Voice Signal Technologies, Inc. | Combined speech and handwriting recognition |
US7809574B2 (en) | 2001-09-05 | 2010-10-05 | Voice Signal Technologies Inc. | Word recognition using choice lists |
US20050159957A1 (en) * | 2001-09-05 | 2005-07-21 | Voice Signal Technologies, Inc. | Combined speech recognition and sound recording |
US7467089B2 (en) | 2001-09-05 | 2008-12-16 | Roth Daniel L | Combined speech and handwriting recognition |
US20030193478A1 (en) * | 2002-04-04 | 2003-10-16 | Edwin Ng | Reduced keyboard system that emulates QWERTY-type mapping and typing |
US7202853B2 (en) | 2002-04-04 | 2007-04-10 | Xrgomics Pte, Ltd. | Reduced keyboard system that emulates QWERTY-type mapping and typing |
WO2003085505A1 (en) * | 2002-04-04 | 2003-10-16 | Xrgomics Pte. Ltd | Reduced keyboard system that emulates qwerty-type mapping and typing |
SG125895A1 (en) * | 2002-04-04 | 2006-10-30 | Xrgomics Pte Ltd | Reduced keyboard system that emulates qwerty-type mapping and typing |
US20050283358A1 (en) * | 2002-06-20 | 2005-12-22 | James Stephanick | Apparatus and method for providing visual indication of character ambiguity during text entry |
US8583440B2 (en) * | 2002-06-20 | 2013-11-12 | Tegic Communications, Inc. | Apparatus and method for providing visual indication of character ambiguity during text entry |
US8237682B2 (en) | 2003-04-09 | 2012-08-07 | Tegic Communications, Inc. | System and process for selectable input with a touch screen |
US7821503B2 (en) | 2003-04-09 | 2010-10-26 | Tegic Communications, Inc. | Touch screen and graphical user interface |
US20110037718A1 (en) * | 2003-04-09 | 2011-02-17 | James Stephanick | System and process for selectable input with a touch screen |
US8456441B2 (en) | 2003-04-09 | 2013-06-04 | Tegic Communications, Inc. | Selective input system and process based on tracking of motion parameters of an input object |
US20090213134A1 (en) * | 2003-04-09 | 2009-08-27 | James Stephanick | Touch screen and graphical user interface |
US8237681B2 (en) | 2003-04-09 | 2012-08-07 | Tegic Communications, Inc. | Selective input system and process based on tracking of motion parameters of an input object |
US20100271299A1 (en) * | 2003-04-09 | 2010-10-28 | James Stephanick | Selective input system and process based on tracking of motion parameters of an input object |
US20060236239A1 (en) * | 2003-06-18 | 2006-10-19 | Zi Corporation | Text entry system and method |
US20050049858A1 (en) * | 2003-08-25 | 2005-03-03 | Bellsouth Intellectual Property Corporation | Methods and systems for improving alphabetic speech recognition accuracy |
EP1528459A2 (en) * | 2003-10-27 | 2005-05-04 | Nikolaos Tselios | Method and apparatus of automatic text input in digital devices with a reduced number of keys |
EP1528459A3 (en) * | 2003-10-27 | 2005-11-16 | Nikolaos Tselios | Method and apparatus of automatic text input in digital devices with a reduced number of keys |
US20050137868A1 (en) * | 2003-12-19 | 2005-06-23 | International Business Machines Corporation | Biasing a speech recognizer based on prompt context |
US7542907B2 (en) | 2003-12-19 | 2009-06-02 | International Business Machines Corporation | Biasing a speech recognizer based on prompt context |
US8570292B2 (en) | 2003-12-22 | 2013-10-29 | Tegic Communications, Inc. | Virtual keyboard system with automatic correction |
US20060274051A1 (en) * | 2003-12-22 | 2006-12-07 | Tegic Communications, Inc. | Virtual Keyboard Systems with Automatic Correction |
US7376938B1 (en) | 2004-03-12 | 2008-05-20 | Steven Van der Hoeven | Method and system for disambiguation and predictive resolution |
US20050200609A1 (en) * | 2004-03-12 | 2005-09-15 | Van Der Hoeven Steven | Apparatus method and system for a data entry interface |
US7555732B2 (en) | 2004-03-12 | 2009-06-30 | Steven Van der Hoeven | Apparatus method and system for a data entry interface |
US8854301B2 (en) | 2004-06-02 | 2014-10-07 | Blackberry Limited | Handheld electronic device with text disambiguation |
US20110010174A1 (en) * | 2004-06-02 | 2011-01-13 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US9621691B2 (en) | 2004-06-02 | 2017-04-11 | Blackberry Limited | Handheld electronic device with text disambiguation |
US20120323561A1 (en) * | 2004-06-02 | 2012-12-20 | Research In Motion Limited | Handheld electronic device with text disambiguation |
US8731900B2 (en) | 2004-06-02 | 2014-05-20 | Blackberry Limited | Handheld electronic device with text disambiguation |
US8519953B2 (en) * | 2004-06-02 | 2013-08-27 | Research In Motion Limited | Handheld electronic device with text disambiguation |
US8095364B2 (en) | 2004-06-02 | 2012-01-10 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US20120041757A1 (en) * | 2004-06-02 | 2012-02-16 | Research In Motion Limited | Handheld electronic device with text disambiguation |
US8473010B2 (en) * | 2004-06-02 | 2013-06-25 | Research In Motion Limited | Handheld electronic device with text disambiguation |
US8606582B2 (en) | 2004-06-02 | 2013-12-10 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US9786273B2 (en) | 2004-06-02 | 2017-10-10 | Nuance Communications, Inc. | Multimodal disambiguation of speech recognition |
US8311829B2 (en) | 2004-06-02 | 2012-11-13 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US20090146848A1 (en) * | 2004-06-04 | 2009-06-11 | Ghassabian Firooz Benjamin | Systems to enhance data entry in mobile and fixed environment |
US7872595B2 (en) * | 2004-07-13 | 2011-01-18 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting an alphabet character in a terminal with a keypad |
US20060012494A1 (en) * | 2004-07-13 | 2006-01-19 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting an alphabet character in a terminal with a keypad |
US8400433B2 (en) | 2004-07-20 | 2013-03-19 | Research In Motion Limited | Handheld electronic device having facilitated telephone dialing with audible sound tags, and associated method |
US20060019707A1 (en) * | 2004-07-20 | 2006-01-26 | Griffin Jason T | Handheld electronic device having facilitated telephone dialing with audible sound tags, and associated method |
CN101099131B (en) * | 2004-12-07 | 2011-06-29 | Zi Corporation of Canada | Equipment and method for searching and finding |
US20090199092A1 (en) * | 2005-06-16 | 2009-08-06 | Firooz Ghassabian | Data entry system |
US9158388B2 (en) | 2005-06-16 | 2015-10-13 | Keyless Systems Ltd. | Data entry system |
US20130082934A1 (en) * | 2005-07-15 | 2013-04-04 | Research In Motion Limited | Systems and methods for inputting data using multi-character keys |
US8373651B2 (en) * | 2005-07-15 | 2013-02-12 | Research In Motion Limited | Systems and methods for inputting data using multi-character keys |
US20070013650A1 (en) * | 2005-07-15 | 2007-01-18 | Research In Motion Limited | Systems and methods for inputting data using multi-character keys |
US8692766B2 (en) * | 2005-07-15 | 2014-04-08 | Blackberry Limited | Systems and methods for inputting data using multi-character keys |
US20070028019A1 (en) * | 2005-07-27 | 2007-02-01 | Millind Mittal | Method and apparatus for efficient text entry in cell phones and other small keypad devices |
US8803713B2 (en) * | 2006-01-13 | 2014-08-12 | Blackberry Limited | Handheld electronic device and method for disambiguation of text input providing suppression of low probability artificial variants |
US9250711B2 (en) | 2006-01-13 | 2016-02-02 | Blackberry Limited | Handheld electronic device and method for disambiguation of text input providing suppression of low probability artificial variants |
US20090174580A1 (en) * | 2006-01-13 | 2009-07-09 | Vadim Fux | Handheld Electronic Device and Method for Disambiguation of Text Input Providing Suppression of Low Probability Artificial Variants |
US8497785B2 (en) * | 2006-01-13 | 2013-07-30 | Research In Motion Limited | Handheld electronic device and method for disambiguation of text input providing suppression of low probability artificial variants |
US20070237310A1 (en) * | 2006-03-30 | 2007-10-11 | Schmiedlin Joshua L | Alphanumeric data entry apparatus and method using multicharacter keys of a keypad |
US8296484B2 (en) | 2006-03-30 | 2012-10-23 | Harris Corporation | Alphanumeric data entry apparatus and method using multicharacter keys of a keypad |
US20080141125A1 (en) * | 2006-06-23 | 2008-06-12 | Firooz Ghassabian | Combined data entry systems |
US20080002885A1 (en) * | 2006-06-30 | 2008-01-03 | Vadim Fux | Method of learning a context of a segment of text, and associated handheld electronic device |
US8395586B2 (en) * | 2006-06-30 | 2013-03-12 | Research In Motion Limited | Method of learning a context of a segment of text, and associated handheld electronic device |
US8060839B2 (en) * | 2006-06-30 | 2011-11-15 | Samsung Electronics Co., Ltd | Character input method and mobile communication terminal using the same |
US9286288B2 (en) | 2006-06-30 | 2016-03-15 | Blackberry Limited | Method of learning character segments during text input, and associated handheld electronic device |
US9171234B2 (en) | 2006-06-30 | 2015-10-27 | Blackberry Limited | Method of learning a context of a segment of text, and associated handheld electronic device |
US20080001788A1 (en) * | 2006-06-30 | 2008-01-03 | Samsung Electronics Co., Ltd. | Character input method and mobile communication terminal using the same |
US20080154576A1 (en) * | 2006-12-21 | 2008-06-26 | Jianchao Wu | Processing of reduced-set user input text with selected one of multiple vocabularies and resolution modalities |
US20080162113A1 (en) * | 2006-12-28 | 2008-07-03 | Dargan John P | Method and Apparatus for Predicting Text |
US8195448B2 (en) | 2006-12-28 | 2012-06-05 | John Paisley Dargan | Method and apparatus for predicting text |
US8225203B2 (en) | 2007-02-01 | 2012-07-17 | Nuance Communications, Inc. | Spell-check for a keyboard system with automatic correction |
US9092419B2 (en) | 2007-02-01 | 2015-07-28 | Nuance Communications, Inc. | Spell-check for a keyboard system with automatic correction |
US8892996B2 (en) | 2007-02-01 | 2014-11-18 | Nuance Communications, Inc. | Spell-check for a keyboard system with automatic correction |
US20110193797A1 (en) * | 2007-02-01 | 2011-08-11 | Erland Unruh | Spell-check for a keyboard system with automatic correction |
US8201087B2 (en) | 2007-02-01 | 2012-06-12 | Tegic Communications, Inc. | Spell-check for a keyboard system with automatic correction |
US7809719B2 (en) | 2007-02-08 | 2010-10-05 | Microsoft Corporation | Predicting textual candidates |
US7912700B2 (en) | 2007-02-08 | 2011-03-22 | Microsoft Corporation | Context based word prediction |
US20080195388A1 (en) * | 2007-02-08 | 2008-08-14 | Microsoft Corporation | Context based word prediction |
US20080195571A1 (en) * | 2007-02-08 | 2008-08-14 | Microsoft Corporation | Predicting textual candidates |
US20120253701A1 (en) * | 2007-05-22 | 2012-10-04 | Avaya Inc. | Monitoring key-press delay and duration to determine need for assistance |
US8649505B2 (en) * | 2007-05-22 | 2014-02-11 | Avaya Inc. | Monitoring key-press delay and duration to determine need for assistance |
US20100302163A1 (en) * | 2007-08-31 | 2010-12-02 | Benjamin Firooz Ghassabian | Data entry system |
US9026426B2 (en) | 2009-03-19 | 2015-05-05 | Google Inc. | Input method editor |
CN102439540A (en) * | 2009-03-19 | 2012-05-02 | Google Inc. | Input method editor |
CN102439540B (en) * | 2009-03-19 | 2015-04-08 | Google Inc. | Input method editor |
US8704761B2 (en) | 2009-03-19 | 2014-04-22 | Google Inc. | Input method editor |
US20100283638A1 (en) * | 2009-05-05 | 2010-11-11 | Burrell Iv James W | World's fastest multi-tap phone and control means |
US8441377B2 (en) * | 2009-06-24 | 2013-05-14 | Htc Corporation | Method of dynamically adjusting long-press delay time, electronic device, and computer-readable medium |
US20100328112A1 (en) * | 2009-06-24 | 2010-12-30 | Htc Corporation | Method of dynamically adjusting long-press delay time, electronic device, and computer-readable medium |
US8547337B2 (en) * | 2009-07-23 | 2013-10-01 | Ki-Sup Yoon | Device for inputting english characters for a mobile communication terminal, and method for same |
US20120188168A1 (en) * | 2009-07-23 | 2012-07-26 | Ki-Sup Yoon | Device for inputting english characters for a mobile communication terminal, and method for same |
US20140115491A1 (en) * | 2011-04-15 | 2014-04-24 | Doro AB | Portable electronic device having user interface features which are adjustable based on user behaviour patterns |
US9639266B2 (en) | 2011-05-16 | 2017-05-02 | Touchtype Limited | User input prediction |
US10416885B2 (en) | 2011-05-16 | 2019-09-17 | Touchtype Limited | User input prediction |
Also Published As
Publication number | Publication date |
---|---|
US6005495A (en) | 1999-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6346894B1 (en) | | Method and system for intelligent text entry on a numeric keypad |
RU2377664C2 (en) | | Text input method |
US11416679B2 (en) | | System and method for inputting text into electronic devices |
US10402493B2 (en) | | System and method for inputting text into electronic devices |
US7129932B1 (en) | | Keyboard for interacting on small devices |
US6636162B1 (en) | | Reduced keyboard text input system for the Japanese language |
US6646573B1 (en) | | Reduced keyboard text input system for the Japanese language |
US7440889B1 (en) | | Sentence reconstruction using word ambiguity resolution |
EP2133772B1 (en) | | Device and method incorporating an improved text input mechanism |
KR100766169B1 (en) | | Computer-implemented dictionary learning method and device using the same, input method and user terminal device using the same |
EP1514357B1 (en) | | Explicit character filtering of ambiguous text entry |
EP1950669B1 (en) | | Device incorporating improved text input mechanism using the context of the input |
RU2334269C2 (en) | | Device and method of complex words creation |
US8296484B2 (en) | | Alphanumeric data entry apparatus and method using multicharacter keys of a keypad |
Shieber et al. | | Abbreviated text input |
JP3492981B2 (en) | | An input system for generating input sequence of phonetic kana characters |
Hasan et al. | | N-best hidden markov model supertagging to improve typing on an ambiguous keyboard |
KR100397435B1 (en) | | Method for processing language model using classes capable of processing registration of new words in speech recognition system |
Rădescu et al. | | Text prediction techniques based on the study of constraints and their applications for intelligent virtual keyboards in learning systems |
JPH10105578A (en) | | Similar word retrieving method utilizing point |
Hasan et al. | | N-Best Hidden Markov Model Supertagging for Typing with Ambiguous Keyboards |
Harbusch et al. | | Applications of HMM-based supertagging |
Elumeze et al. | | Intelligent Predictive Text Input System using Japanese Language |
JPH10154143A (en) | | Kana-Kanji conversion device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: AMERITECH PROPERTIES, INC., NEVADA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AMERITECH CORPORATION;REEL/FRAME:013986/0525. Effective date: 20020626. Owner name: SBC HOLDINGS PROPERTIES, L.P., NEVADA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AMERITECH PROPERTIES, INC.;REEL/FRAME:013974/0542. Effective date: 20020626. Owner name: SBC PROPERTIES, L.P., NEVADA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SBC HOLDINGS PROPERTIES, L.P.;REEL/FRAME:014015/0689. Effective date: 20020626 |
| FPAY | Fee payment | Year of fee payment: 4 |
| FPAY | Fee payment | Year of fee payment: 8 |
| FPAY | Fee payment | Year of fee payment: 12 |